```jsx
const FancyInput = React.forwardRef((props, ref) => (
  <input ref={ref} {...props} />
));

function Parent() {
  const inputRef = useRef(null);

  const focusInput = () => {
    inputRef.current.focus();
  };

  return (
    <>
      <FancyInput ref={inputRef} placeholder="Focus me with the button" />
      <button onClick={focusInput}>Focus Input</button>
    </>
  );
}
```
## Why is forwardRef important?

`forwardRef` allows for more flexible and efficient component composition. When working with complex applications, there are cases where you need direct access to a child component’s DOM element or instance from a parent component. However, React’s default behavior doesn’t always allow this, especially when dealing with higher-order components (HOCs) or wrapper components.

By using `forwardRef`, you can pass a reference from a parent component to a child component, even if that child component is wrapped inside another component. This enables the parent component to directly interact with the child’s DOM element or instance.
To understand ref forwarding, we must first understand what refs are. Refs are a way to access and interact with a DOM element directly. Refs allow you to bypass the typical React data flow and perform actions not achievable with props and state alone.
Refs are often used for tasks like setting focus on an input field, measuring the dimensions of an element, or triggering animations. For instance, you can use a ref to focus an input field when a button is clicked:
```jsx
import * as React from "react";
import ReactDOM from "react-dom";

export default function App() {
  const ref = React.useRef();

  function focus() {
    ref.current.focus();
  }

  return (
    <div className="App">
      <input ref={ref} placeholder="my input" />
      <button onClick={focus}>Focus</button>
    </div>
  );
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);
```
We could achieve a similar effect with vanilla JavaScript. However, accessing the DOM directly like this is considered bad practice when using React. The plain JavaScript equivalent of focusing an element looks like the following snippet:
```js
document.getElementById('myInput').focus()
```
In React, it’s generally recommended to use props and state to manage your component data flow. However, there are some situations where using refs can be helpful or even necessary. Common use cases for refs in React include:

- Managing focus, text selection, or media playback
- Triggering imperative animations
- Measuring the size or position of a DOM element
- Integrating with third-party DOM libraries

Many of these refs can later be forwarded to child components with `forwardRef`, which we’ll cover below.
While refs are powerful tools, they should be used sparingly and only when necessary. Excessive use of refs can lead to code that is harder to understand and maintain. Always opt to use props and state for data flow in React components when possible.
## Using refs in class components

In this section, we will focus specifically on working with refs in class components. Although React has moved toward functional components with Hooks, it is still important to understand how to manage refs in class components, as they remain prevalent in many existing projects.

We will cover the process of creating, attaching, and using refs in class components, along with examples that illustrate common use cases. This knowledge will enable you to use refs effectively in class components and facilitate a smoother transition to functional components and Hooks when needed.
To create a ref, React provides a function called `React.createRef`. Once created, refs can be attached to React elements via the `ref` attribute. When a component is constructed, refs get assigned to instance properties of that component, ensuring that they can be referenced anywhere in the component. Here’s what that looks like:
```jsx
class MyComponent extends React.Component {
  constructor(props) {
    super(props);
    this.newRef = React.createRef(); // newRef is now available for use throughout our component
  }
  // ...
}
```
At this point, we have created a ref called `newRef`. To use this ref in our component, we simply pass it as a value to the `ref` attribute like this:
```jsx
class MyComponent extends React.Component {
  // ...
  render() {
    return <div ref={this.newRef} />;
  }
}
```
Here, we’ve attached the ref and passed in `newRef` as its value. As a result, we can now interact with this element directly, without changing the component’s state.
## Attaching refs to DOM elements

In this section, we will discuss attaching refs in React, which is the process of relating a ref to a DOM element for direct DOM manipulation. This step is crucial in order to effectively work with refs and employ their potential in various use cases, such as managing focus, measuring element dimensions, or triggering animations.
We already covered how to create refs with `createRef`, so now we will relate a ref to a DOM element by using the `ref` prop:
```jsx
<div ref={this.myRef} />
```
And, finally, when we are ready to access the DOM element later on in the component lifecycle, we can do something like this:
```js
const divWidth = this.myRef.current.offsetWidth;
```
Let’s see this behavior with a complete example, where we attach a reference to an HTML `video` element and use React buttons to play and pause the video through the native HTML5 APIs of the `video` element:
```jsx
import ReactDOM from "react-dom";
import React, { Component } from "react";

export default class App extends Component {
  constructor(props) {
    super(props);
    this.myVideo = React.createRef();
  }
  render() {
    return (
      <div>
        <video ref={this.myVideo} width="320" height="176" controls>
          <source
            src="https://res.cloudinary.com/daintu6ky/video/upload/v1573070866/Screen_Recording_2019-11-06_at_4.14.52_PM.mp4"
            type="video/mp4"
          />
        </video>
        <div>
          <button
            onClick={() => {
              this.myVideo.current.play();
            }}
          >
            Play
          </button>
          <button
            onClick={() => {
              this.myVideo.current.pause();
            }}
          >
            Pause
          </button>
        </div>
      </div>
    );
  }
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);
```
Here, we used the ref to pause and play our video by calling the `pause` and `play` methods on the `video` element. When the pause or play button is clicked, the corresponding function is called on the video player without a re-render.
Refs can be attached to DOM elements and to class components, but not directly to functional components. The bottom line is that functional components do not have instances, so there is nothing to reference.

However, if you must attach a ref to a functional component without forwarding it, the React docs have traditionally recommended converting the component to a class, just as you would when you need lifecycle methods or state.
Aside from passing the default `ref` attribute, we can also pass functions to set refs. The major advantage of this approach is that you have more control over when refs are set and unset, because it allows us to determine the state of the ref before certain actions are fired.

Consider this snippet from the React documentation:
```jsx
class CustomTextInput extends React.Component {
  constructor(props) {
    super(props);
    this.textInput = null;
    this.setTextInputRef = (element) => {
      this.textInput = element;
    };
    this.focusTextInput = () => {
      // Focus the text input using the raw DOM API
      if (this.textInput) this.textInput.focus();
    };
  }
  componentDidMount() {
    this.focusTextInput();
  }
  render() {
    return (
      <div>
        <input type="text" ref={this.setTextInputRef} />
        <input
          type="button"
          value="Focus the text input"
          onClick={this.focusTextInput}
        />
      </div>
    );
  }
}
```
Instead of defining the ref in the constructor with `createRef`, we set the initial value to `null`. The benefit of this approach is that `textInput` will not reference a node until the component is loaded (when the element is created).
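One related nuance: React also calls a callback ref with `null` when the element unmounts, so the same callback observes both ends of the element’s life. A minimal sketch, reusing `setTextInputRef` from above:

```jsx
this.setTextInputRef = (element) => {
  // element is the DOM node after mount, and null after unmount
  this.textInput = element;
};
```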
## Using forwardRef

When a parent component needs access to a DOM node inside a child component, the parent needs a way to send its ref down to the child. This technique is called ref forwarding, and it is very useful when building reusable component libraries. Ref forwarding can be achieved using the `forwardRef` function.
Let’s take the example of a new library with an `InputText` component that will eventually provide a lot of functionality, though for now we’ll keep it simple:
```jsx
const InputText = (props) => (
  <input {...props} />
);
```
The `InputText` component will be used throughout the application much like a regular DOM input, so accessing its DOM node may be unavoidable for managing focus, selection, or animations related to it.
As written above, other components in the application have no access to the DOM input element generated by the `InputText` component. This restricts some of the operations we have already foreseen we’ll need to meet our application requirements, such as controlling the focus of the input programmatically.
This is where `React.forwardRef` comes in: it obtains the `ref` passed to the component and forwards it to the DOM `input` that it renders, as shown below:
```jsx
const InputText = React.forwardRef((props, ref) => (
  <input ref={ref} {...props} />
));
```
Now that our component supports `forwardRef`, let’s use it in the context of our application to build a button that automatically focuses the input when clicked. The code looks as follows:
```jsx
import * as React from "react";
import ReactDOM from "react-dom";

const InputText = React.forwardRef((props, ref) => (
  <input ref={ref} {...props} />
));

export default function App() {
  const ref = React.useRef();

  function focus() {
    ref.current.focus();
  }

  return (
    <div className="App">
      <InputText ref={ref} placeholder="my input" />
      <button onClick={focus}>Focus</button>
    </div>
  );
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);
```
In the code above, we defined a `ref` in the component that needs it and passed it to the `InputText` component. React forwarded the `ref` down to `<input ref={ref}>` by specifying it as a JSX attribute. When the ref was attached, `ref.current` pointed to the `<input>` DOM node.
The second `ref` argument in the `InputText` component only exists when you define a component with a `React.forwardRef` call. Regular function or class components don’t receive the `ref` argument, and `ref` is not available in `props` either. Ref forwarding is not limited to DOM components; you can also forward refs to class component instances.
## forwardRef with class components

Although `forwardRef` works best with functional components, it can also be used with class components. It comes in handy when working with a library that uses `forwardRef`, when wrapping class components in higher-order components, when accessing child component DOM nodes, or when passing a `ref` down through multiple components.
Let’s look at a case where we have a class component and want to wrap it in a higher-order component while still being able to pass refs through to the original class component. This can be achieved using `forwardRef`:
```jsx
import React, { forwardRef, Component } from 'react';

class ButtonComponent extends Component {
  handleClick = () => {
    console.log('Button clicked in ButtonComponent');
  };

  render() {
    return <button onClick={this.handleClick}>Click Me</button>;
  }
}

const Form = forwardRef((props, ref) => (
  <ButtonComponent ref={ref} {...props} />
));

export default Form;
```
In the above example, the parent component can access the methods and properties of `ButtonComponent` by passing a ref to `Form`, which then forwards it to `ButtonComponent`:
```jsx
import React, { useRef } from 'react';
import Form from './Form';

const App = () => {
  const buttonComponentRef = useRef();

  const handleButtonClick = () => {
    if (buttonComponentRef.current) {
      buttonComponentRef.current.handleClick(); // Call method in ButtonComponent
    }
  };

  return (
    <div>
      <Form ref={buttonComponentRef} />
      <button onClick={handleButtonClick}>click</button>
    </div>
  );
};

export default App;
```
You can wrap a class component in another class component and handle ref forwarding manually, but it’s more difficult and less intuitive than using `forwardRef` with a functional wrapper.
You should also note that using `forwardRef` with class components can add complexity in large codebases, so use it carefully and document its usage clearly. Debugging may also become slightly more difficult, as the component name may not appear as expected in React DevTools. You can address this by giving the forwarded component a display name.
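For instance, you could give the `Form` wrapper from the earlier example a display name via the standard `displayName` property (a minimal sketch):

```jsx
const Form = forwardRef((props, ref) => (
  <ButtonComponent ref={ref} {...props} />
));

// Without this, the component may show up as "ForwardRef" in React DevTools
Form.displayName = "Form";
```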
## useImperativeHandle with forwardRef

`useImperativeHandle` is a React Hook that lets you customize the instance value that is exposed when refs are used. It works well with `forwardRef` to expose imperative methods, which allows for more control and functionality.
The `useImperativeHandle` Hook can be useful when you need to expose methods or properties of a component, such as `focus`, `toggle`, `mount`, `onClick`, or custom methods, to the parent while keeping the component’s implementation details encapsulated.
`useImperativeHandle` is used within a component wrapped with `forwardRef`, and it takes three arguments:

- The `ref` forwarded from the parent component
- A function that returns the object of methods and properties to expose
- An optional dependency array that controls when that object is re-created
Let’s see how to use `useImperativeHandle` in React and how it offers enhanced component control:
```jsx
import React, { useImperativeHandle, forwardRef, useRef } from 'react';

const CustomInput = forwardRef((props, ref) => {
  const inputRef = useRef();

  useImperativeHandle(ref, () => ({
    focus: () => {
      inputRef.current.focus();
    },
    clear: () => {
      inputRef.current.value = '';
    },
  }));

  return <input ref={inputRef} {...props} />;
});

const App = () => {
  const inputRef = useRef();

  return (
    <div>
      <CustomInput ref={inputRef} />
      <button onClick={() => inputRef.current.focus()}>Focus Input</button>
      <button onClick={() => inputRef.current.clear()}>Clear Input</button>
    </div>
  );
};

export default App;
```
In the example code above, the `App` component can focus and clear the input field using the exposed methods, providing precise control over the child component.
`useImperativeHandle` works similarly to `useEffect` in that it accepts a dependency array, re-creating the exposed imperative methods only when certain dependencies change. This keeps the methods exposed to the parent component current while avoiding pointless re-creations.
Let’s see that in action. We will create a component that exposes methods to toggle a modal’s visibility, which only re-creates the methods when the modal’s state changes:
```jsx
import React, { useRef, useImperativeHandle, forwardRef, useState } from 'react';

const Modal = forwardRef((props, ref) => {
  const [isVisible, setIsVisible] = useState(false);

  useImperativeHandle(ref, () => ({
    open: () => setIsVisible(true),
    close: () => setIsVisible(false),
    toggle: () => setIsVisible(prev => !prev),
  }), [isVisible]);

  return (
    <>
      {isVisible && (
        <div className="modal">
          <p>Modal Content</p>
          <button onClick={() => setIsVisible(false)}>Close</button>
        </div>
      )}
    </>
  );
});

const App = () => {
  const modalRef = useRef();

  return (
    <div>
      <button onClick={() => modalRef.current.open()}>Open Modal</button>
      <button onClick={() => modalRef.current.close()}>Close Modal</button>
      <button onClick={() => modalRef.current.toggle()}>Toggle Modal</button>
      <Modal ref={modalRef} />
    </div>
  );
};

export default App;
```
In this example, the imperative methods for managing the modal are only re-created when the `isVisible` state changes, making sure the parent component always has the relevant methods without additional overhead.
## forwardRef with TypeScript

TypeScript is a superset of JavaScript that offers static typing, enhanced tooling, and improved maintainability, leading to better and more reliable code in your applications. `forwardRef`, as part of the React library, fully supports TypeScript, though to maximize its benefits, the code we write should also be strongly typed.
Say, for example, that you have a functional component that uses `forwardRef` to expose the DOM reference of an HTML input element, and that this component declares its own typed props. When using `forwardRef` in such a case, we have to strongly type the component to avoid errors and improve code readability. One way to do this is by passing generic type parameters to `forwardRef`. Here’s an example:
```tsx
type IInputProps = {
  label: string;
};

const InputText = React.forwardRef<HTMLInputElement, IInputProps>(
  (props, ref) => (
    <div>
      <span>{props.label}</span>
      <input ref={ref} placeholder="your input goes here..." />
    </div>
  )
);
```
As we can see in the code, it is important to specify both the type of the ref element and the type of the props. Another common way to declare the same component is to assign the types directly in the parameters of the callback function, as follows:
```tsx
type IInputProps = {
  label: string;
};

const InputText = React.forwardRef(
  (props: IInputProps, ref: React.Ref<HTMLInputElement>) => (
    <div>
      <span>{props.label}</span>
      <input ref={ref} placeholder="your input goes here..." />
    </div>
  )
);
```
In this case, when typing the `ref` parameter, we need to make sure we wrap the element type with `React.Ref`. Both ways of declaring the component are valid, and there is no strong argument for one over the other; it comes down to developer style, and in my case, I prefer the first way because I find it cleaner.
Similarly, when working on the parent component, we need to specify the reference type, and it needs to match the one used in `forwardRef`. To do that, you can pass a generic type to `useRef` as follows:
```tsx
const ref = React.useRef<HTMLInputElement>(null);
```
Failing to do so may trigger type errors when we try to use methods and properties of the element.
## When not to use refs

In React, refs are a powerful feature that allows developers to interact with DOM elements and components directly. However, there are certain situations where using refs may not be the best approach. Here are a few:
- Direct DOM manipulation: React encourages a declarative approach to building UIs, so avoid using refs for direct DOM manipulation unless necessary. Use component state and props to handle most UI updates
- Overloaded functional components: functional components are often meant to be simple and stateless. If you find yourself using multiple refs in a functional component, consider whether it could be split into smaller components or whether state management should be lifted to a higher-level component
- Replacing data flow: refs should not be used as a replacement for state management or prop passing. Data should primarily flow through component props, and when necessary, state management libraries like Redux or React’s Context API can be used
- Form handling: in form elements, use controlled components (setting the value and handling input changes through state and event handlers) whenever possible. Refs should only be used for uncontrolled components when there is a specific need for direct access to the DOM element
- Reaching into children: refs should not be used to reach into a child component’s internal state or methods. Instead, use callback functions or other state management patterns to communicate between parent and child components

Remember, refs should generally be used only when necessary. In many cases, React’s built-in mechanisms for state and prop management are more appropriate for handling component interaction and updates.
## forwardRef vs. useRef and other techniques

When managing refs in React, it’s important to know when to use `forwardRef`, `useRef`, or other techniques. Each serves different use cases for accessing or passing references:
- `useRef`: use this Hook when you want to create a mutable ref inside a functional component, typically to directly access a DOM element or to keep a value between renders. However, `useRef` is local to the component where it’s declared and cannot be passed down to children through the `ref` attribute by itself
- `forwardRef`: use `forwardRef` when you need to pass a ref from a parent component through a child component to a nested DOM element or class instance. It’s essential when the child component doesn’t accept refs by default, such as function components wrapped in higher-order components
Other alternatives:

- Callback refs: these give you more control by letting you set refs dynamically, but they can be more verbose
- React Context: sometimes refs or values can be shared using context for deeply nested components, but this is less direct and typically not used for DOM access
- Imperative handle with `useImperativeHandle`: this works together with `forwardRef` to customize the instance value exposed to the parent
Choosing the right approach depends on your component hierarchy and whether you want direct DOM access or need to abstract it.
## forwardRef in React 19

In React 19, `forwardRef` is deprecated for function components: a function component can now receive `ref` directly as a regular prop, so the `forwardRef` wrapper is no longer needed in most cases. This reflects a shift in best practices toward simpler patterns for managing refs and component interaction.

Complementary approaches include:

- Using hooks combined with context or state management to avoid direct DOM manipulation
- Custom hooks that expose imperative methods, reducing the need to forward refs explicitly
- Component APIs designed to handle interactions declaratively, reducing reliance on refs
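As a minimal sketch of the React 19 pattern, reusing the `InputText` example from earlier for illustration:

```jsx
// React 19: ref arrives as a regular prop, so no forwardRef wrapper is needed
function InputText({ ref, ...props }) {
  return <input ref={ref} {...props} />;
}

function App() {
  const inputRef = React.useRef(null);
  return <InputText ref={inputRef} placeholder="my input" />;
}
```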
Stay tuned to React’s official releases and RFCs to adopt the best modern alternatives as React evolves beyond version 18.
## Conclusion

Refs in React provide a direct way to access DOM nodes or component instances, unlocking powerful options for building cleaner, more performant, and feature-rich components. But it’s important to remember that direct DOM manipulation is generally discouraged in React. When misused, refs can introduce bugs and complexity and break React’s declarative model.

Use refs, and especially `forwardRef`, only when necessary, such as when integrating with third-party libraries or managing focus imperatively. This article focused on how to use `forwardRef` in React 18 and earlier, explaining key use cases and practical examples with both function and class components.

Thanks for reading! Keep ref usage thoughtful and intentional.
# In-browser database sandboxes for frontend development

In-browser database sandboxes help frontend developers manage data directly in their web browsers. This ability allows for dynamic user interfaces, offline applications, and quick testing without needing complex backend systems. These lightweight tools, which use APIs like IndexedDB, make it easier to store, search, and sync data. Some sandboxes also offer AI features, like natural language queries and automated data generation, making database tasks simpler for developers who may not have much experience.
This article discusses how in-browser database sandboxes improve frontend workflows. It highlights key tools such as RxDB, database.build, Neo4j Sandbox, and AskYourDatabase, and looks at their features, use cases, and AI capabilities to help you choose the best tool for your project.

## What are in-browser database sandboxes?

In-browser database sandboxes are simple tools that let developers store, manage, and access data directly in the browser. They use APIs like IndexedDB or libraries built on top of them.

These sandboxes act as complete databases, so developers don’t need to set up a backend right away. They allow developers to create, change, and look up data in real time, all while working in the user’s browser.
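To make that concrete, here is a minimal sketch of storing and reading a record with the raw IndexedDB API; the database and store names are arbitrary:

```js
// Open (or create) a database with an object store for tasks
const request = indexedDB.open("sandbox-demo", 1);

request.onupgradeneeded = () => {
  request.result.createObjectStore("tasks", { keyPath: "id" });
};

request.onsuccess = () => {
  const db = request.result;

  // Write a record inside a read-write transaction
  const tx = db.transaction("tasks", "readwrite");
  tx.objectStore("tasks").put({ id: 1, title: "Try an in-browser database" });

  // Read it back once the write transaction completes
  tx.oncomplete = () => {
    const getReq = db.transaction("tasks").objectStore("tasks").get(1);
    getReq.onsuccess = () => console.log(getReq.result.title);
  };
};
```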
Key features of these sandboxes include:

- Local, in-browser storage built on APIs like IndexedDB
- Creating, querying, and updating data in real time
- Offline support, often with syncing to a backend once a connection is available
- In some tools, AI assistance such as natural language queries and data generation

Different tools offer a variety of AI features, but the main advantage of these sandboxes is that they can handle data on the user’s device.
Here’s how in-browser databases support key frontend tasks:

Modern web applications need user interfaces (UIs) that update automatically as data changes. In-browser databases provide this functionality through reactive data binding.

When data in the database changes, the UI updates right away. This means developers don’t have to manually change the interface or manage complex state. For example, in a task management app, when a new task is added to an RxDB collection, the UI updates instantly without needing to communicate with the server.
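Here is a rough sketch of that pattern; the exact imports and storage plugin depend on your RxDB version, and the schema is illustrative:

```js
import { createRxDatabase } from "rxdb";
import { getRxStorageDexie } from "rxdb/plugins/storage-dexie";

async function initTasksDb() {
  // Create a database persisted to IndexedDB via the Dexie storage adapter
  const db = await createRxDatabase({
    name: "tasksdb",
    storage: getRxStorageDexie(),
  });

  await db.addCollections({
    tasks: {
      schema: {
        version: 0,
        primaryKey: "id",
        type: "object",
        properties: {
          id: { type: "string", maxLength: 100 },
          title: { type: "string" },
        },
      },
    },
  });

  // Reactive query: the subscription fires again whenever the result set changes
  db.tasks.find().$.subscribe((tasks) => {
    console.log("render task list:", tasks.map((t) => t.title));
  });

  // Inserting a document re-triggers the subscription above; no manual UI wiring
  await db.tasks.insert({ id: "1", title: "Write docs" });
}

initTasksDb();
```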
When creating prototypes, developers need realistic data to mimic user interactions. Setting up a server for this can take a long time. In-browser databases allow developers to quickly create mock datasets and run queries directly in the browser. Some even help generate sample data or suggest queries, making the prototyping process faster.

Progressive web apps (PWAs) need to work offline to provide a seamless experience. In-browser databases store data locally, so applications remain usable even without an internet connection. They can also sync local changes with a backend once the connection is restored, making them perfect for PWAs.

For example, a note-taking PWA built with an in-browser database allows users to create and edit notes without being online. When the user gets back online, the app syncs changes with a remote server automatically.

AI tools in web-based databases make data management easier for frontend developers who may not have much database knowledge. AI-driven features let developers ask questions in natural language (e.g., “Show products priced below $50”), create data structures, and improve queries without needing to write complicated code.

This helps developers learn faster, speeds up their work, and allows them to focus on building user-friendly interfaces instead of struggling with database commands. AI can also help with data visualization by automatically creating charts or graphs from query results.

Frontend developers can use various in-browser database sandboxes, each with its own strengths and AI features. Here are some strong options:
### RxDB

RxDB (Reactive Database) is a NoSQL database that works on the client side and is designed for real-time web applications. It operates entirely in the browser or Node.js and uses storage options such as IndexedDB, WebSQL, or local storage to keep data locally.

RxDB is a great choice for single-page applications (SPAs) and PWAs. It offers features like reactive data binding, offline capabilities, AI support through external plugins, and smooth syncing with backend databases. It is schema-based, and its reactive design means that any changes in the database automatically update the user interface, making it easier for frontend developers to manage application state.

Some of RxDB’s strengths include:

- Reactive queries that keep UIs in sync with data automatically
- An offline-first design suited to PWAs and SPAs
- Replication and syncing with backend databases
- An open source core with a rich plugin ecosystem
### database.build

database.build is an in-browser SQL-based database tool that helps you create and manage databases quickly. It uses IndexedDB for local storage, allowing frontend developers to create, query, and edit mock data without needing a backend server.

database.build is designed for building and testing data-driven websites, especially dynamic UIs and PWAs. One of its key features is AI support, which simplifies writing queries and generating datasets, making it easier for developers who may not have much database experience.

Additional features include an in-browser SQL environment, mock data generation, rapid prototyping, and a user-friendly interface. Other strengths include:

- AI-assisted query suggestions and dataset generation
- Quick setup with no backend required
- A free tier that is friendly to beginners
### Neo4j Sandbox

Neo4j Sandbox is a cloud-based platform where you can try out Neo4j, a leading graph database. Unlike fully in-browser databases such as RxDB or database.build, Neo4j Sandbox runs on Neo4j’s cloud infrastructure but lets you manage and query graph data through your browser.

Neo4j Sandbox is designed for developers creating apps with complex relationships, such as social networks, recommendation systems, knowledge graphs, and data visualization UIs.

Neo4j Sandbox includes tools like Neo4j Bloom for exploring graphs and generating queries with AI help. It makes working with graph databases easier without needing a local setup. Its strengths include:

- AI-powered Cypher query suggestions
- Graph exploration and visualization through tools like Neo4j Bloom
- No local installation required, with a free sandbox tier
### AskYourDatabase

AskYourDatabase is a tool that uses AI to help users work with SQL and NoSQL databases through natural language queries. With this tool, you won’t need to write complicated SQL code or use APIs. It can be used as a desktop application or as a chatbot plugin, such as for ChatGPT. This lets developers query, visualize, and manage data straight from their browser or local setup.

AskYourDatabase is designed to be easy to use and supports tasks like data analysis, schema design, and mock data insertion. This makes it great for building data-driven UI components, prototyping, and getting quick insights without needing deep database knowledge. While it mainly connects to external databases, AskYourDatabase can also work with in-browser storage like IndexedDB for local prototyping.

Some of this tool’s strengths include:

- Natural language to query conversion for both SQL and NoSQL databases
- Built-in data visualization and analysis features
- A beginner-friendly, chat-style interface
As we mentioned in earlier sections, many in-browser database sandboxes offer AI functionality to simplify tasks for frontend developers. Neo4j Sandbox uses AI tools like Neo4j Bloom to recommend Cypher queries and visualize graph data, database.build’s AI functions help create mock databases and suggest queries, and AskYourDatabase allows developers to ask questions in natural language to get the data they need.

But not all tools use AI in the same way. For example, RxDB emphasizes reactive architecture and working offline, with AI support mostly for external connections.

To help developers choose the right tool, the table below compares the main features of the tools we explored above:
| Tool | Database type | Frontend benefits | AI features | In-browser vs. cloud | Accessibility | Ease of use |
|---|---|---|---|---|---|---|
| RxDB | NoSQL (IndexedDB) | Reactive UIs, offline PWAs, sync capabilities | Limited (external AI integration) | In-browser | Open source, free | Moderate (NoSQL knowledge) |
| database.build | SQL | Rapid prototyping, mock data, simple queries | AI query suggestions, data generation | In-browser | Free tier, easy setup | High (visual, AI-driven) |
| Neo4j Sandbox | Graph | Data visualization, complex relationships | AI-powered Cypher query suggestions | Cloud-based | Free sandbox, signup needed | Moderate (Cypher learning) |
| AskYourDatabase | Multi-type | Natural language queries, data-driven components | Natural language to query conversion | In-browser/cloud | Paid, beginner-friendly | High (NLP-based) |
RxDB and database.build are great for offline storage and syncing. They work well for apps, like note-taking or task management apps, that need to operate without an internet connection.
database.build is good for quick prototyping. It lets you create mock datasets and test UI components without a backend. It’s ideal for dashboards or data-heavy interfaces.

Neo4j Sandbox is best for projects that need graph-based data modeling, such as social networks or recommendation systems, where visualizing connections is important.

AI tools like database.build and AskYourDatabase help developers who may not know much about databases. For example, AskYourDatabase allows you to ask questions in plain language to get data without learning SQL. Likewise, database.build offers AI suggestions to make prototyping easier for beginners.

Neo4j Sandbox provides tools that use AI to help you see data more clearly and suggest queries. Tools like Bloom and its graph data science library work well for creating graph-based user interfaces.

Open source tools like RxDB are free and customizable, but may need more setup work. database.build has a free tier and is easy to use, making it suitable for beginners. Neo4j Sandbox requires a signup, and AskYourDatabase has costs, so check its pricing for long-term use.
## Conclusion

In-browser database sandboxes are changing how frontend development works by letting developers manage data directly in the browser, reducing the need to rely on backend systems.

Tools like RxDB, database.build, Neo4j Sandbox, and AskYourDatabase make it easier for developers to streamline their workflows and speed up development. They help build offline PWAs, create prototypes, and visualize complex data relationships. AI features in these tools simplify database management and make it accessible for developers of all skill levels.

Whether you are creating a reactive single-page application, simulating data for a prototype, or displaying a graph-based user interface, in-browser database sandboxes can greatly enhance your frontend projects. Select the right tool for your needs, use AI to simplify tasks, and optimize your client-side data management using what you learned in this article!
\\n If you’re serious about mastering React in the Hooks era, getting comfortable with useEffect
is essential. Unlike the old class-based lifecycle methods (componentDidMount
, componentDidUpdate
, etc.), useEffect
lets you handle side effects directly inside functional components — everything from data fetching to DOM manipulation to setting up subscriptions.
At first, it might feel confusing. Why does it run after render? Why does it re-run on prop changes? What’s with the dependency array?
This guide aims to demystify those questions. I’ll walk you through the core ideas behind `useEffect`, share lessons from real-world projects, and explore patterns that will help you avoid common pitfalls. You’ll also find example code throughout, pulled straight from a companion GitHub repo.
Editor’s note: This article was updated in June 2025 to add a “When not to use useEffect” guide, more real-world examples, and practical steps for using `useEffect` effectively. The previous update in 2023 added a comparison of the `useState` and `useEffect` Hooks and explored the relationship between the `useEffect` Hook and React Server Components.
`useEffect` is a powerful React Hook that lets you run side effects, like fetching data or updating the UI, in response to changes in your component. It helps you manage tasks that happen after the component renders. Here’s a basic example that runs code only once when the component mounts:
```jsx
useEffect(() => {
  // Code to run once when component loads
}, []);
```
## What is the useEffect Hook in React?

What are effects, really? Examples include fetching data, reading from local storage, and registering and deregistering event listeners.
React’s effects are a completely different animal than the lifecycle methods of class-based components. The abstraction level differs, too. To their credit, lifecycle methods do give components a predictable structure: the code is more explicit than with effects, so developers can directly spot the relevant parts (e.g., `componentDidMount`) for performing tasks at particular lifecycle phases (e.g., on component unmount).
As we will see later, the `useEffect` Hook fosters the separation of concerns and reduces code duplication. For example, the official React docs show that you can avoid the duplicated code that results from lifecycle methods with a single `useEffect` statement.
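To illustrate, here is a sketch of the classic document-title example: the class version must duplicate the logic across two lifecycle methods, while a single effect covers both cases:

```jsx
import React, { useEffect, useState } from "react";

// Class version: the same logic is duplicated in two lifecycle methods
class CounterTitleClass extends React.Component {
  state = { count: 0 };
  componentDidMount() {
    document.title = `You clicked ${this.state.count} times`;
  }
  componentDidUpdate() {
    document.title = `You clicked ${this.state.count} times`;
  }
  render() {
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        click
      </button>
    );
  }
}

// Hook version: one effect runs after the first render and after every update
function CounterTitleHooks() {
  const [count, setCount] = useState(0);
  useEffect(() => {
    document.title = `You clicked ${count} times`;
  });
  return <button onClick={() => setCount(count + 1)}>click</button>;
}
```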
A couple of key points to note before we get started:

- Define the effect clearly: write focused code for the side effect inside the `useEffect` callback (memoizing any function dependencies with `useCallback` where needed)
- Control dependencies carefully: list only variables that should trigger the effect when changed
- Add cleanup logic when needed: return a function to clean up subscriptions, timers, or listeners
- Avoid unnecessary effects: don’t use `useEffect` for tasks better done during rendering
- Keep effects fast and efficient: avoid heavy work inside effects to maintain performance
## Why use useEffect for asynchronous tasks?

In React, `useEffect` is the standard way to run asynchronous tasks after a component mounts, like fetching data, subscribing to updates, or syncing with external systems.

Take this real-world scenario: you’re building a dashboard that shows the logged-in user’s profile details. You don’t want to block rendering while fetching the data, and you only want the fetch to run once, right after the component loads.

That’s where `useEffect` shines:
```jsx
import { useState, useEffect } from "react";

function UserProfile() {
  const [profile, setProfile] = useState(null);

  useEffect(() => {
    async function fetchProfile() {
      try {
        const res = await fetch("/api/user");
        const data = await res.json();
        setProfile(data);
      } catch (err) {
        console.error("Failed to load profile", err);
      }
    }

    fetchProfile();
  }, []);

  if (!profile) return <p>Loading...</p>;

  return (
    <div>
      <h2>Welcome, {profile.name}!</h2>
      <p>Email: {profile.email}</p>
    </div>
  );
}
```
## When does useEffect run during the component lifecycle?

In class components, certain lifecycle methods (e.g., `componentDidMount`) are executed at fixed phases of the component lifecycle. Functional components work differently.

This may sound strange at first, but effects defined with `useEffect` are invoked after render. To be more specific, they run both after the first render and after every update. In contrast to lifecycle methods, effects don’t block the UI because they run asynchronously.
If you are new to React, I would recommend ignoring class-based components and lifecycle methods and, instead, learning how to develop functional components and how to decipher the powerful possibilities of effects. Class-based components are rarely used in more recent React development projects.

If you are a seasoned React developer and are familiar with class-based components, you have to do some of the same things in your projects today as you did a few years ago when there were no Hooks.

For example, it is pretty common to “do something” when the component is first rendered. The difference with Hooks here is subtle: you do not do something after the component is mounted; you do something after the component is first presented to the user. Hooks force you to think more from the user’s perspective.
## How is the useEffect Hook executed in React?

This section briefly describes the control flow of effects. The following steps are carried out when executing an effect:

1. When a component using the `useEffect` Hook is initially rendered, the code inside the `useEffect` block runs after the initial render of the component. This is similar to `componentDidMount` in class components
2. On subsequent renders, React compares the dependency array provided as the second argument to the `useEffect` Hook. This array contains the variables or values that the effect depends on; the effect runs again only if one of them has changed
3. If no dependency array is given, the effect runs after every render

If one or more `useEffect` declarations exist for the component, React checks each `useEffect` to determine whether it fulfills the conditions to execute the implementation (the body of the callback function provided as the first argument). In this case, “conditions” mean one or more dependencies have changed since the last render cycle.
## How do you use useEffect?

The signature of the `useEffect` Hook looks like this:
```jsx
useEffect(
  () => {
    // execute side effect
  },
  // optional dependency array
  [
    // 0 or more entries
  ]
);
```
Because the second argument is optional, the following execution is perfectly fine:
```jsx
useEffect(() => {
  // execute side effect
});
```
Let’s take a look at an example. The user can change the document title with an input field:
```jsx
import React, { useState, useRef, useEffect } from "react";

function EffectsDemoNoDependency() {
  const [title, setTitle] = useState("default title");
  const titleRef = useRef();
  useEffect(() => {
    console.log("useEffect");
    document.title = title;
  });
  const handleClick = () => setTitle(titleRef.current.value);
  console.log("render");
  return (
    <div>
      <input ref={titleRef} />
      <button onClick={handleClick}>change title</button>
    </div>
  );
}
```
The `useEffect` statement is defined with only a single, mandatory argument that implements the actual effect to execute. In our case, we use the state variable representing the title and assign its value to `document.title`.
Because we skipped the second argument, this `useEffect` is called after every render. Because we implemented an uncontrolled input field with the help of the `useRef` Hook, `handleClick` is only invoked after the user clicks the button. This causes a re-render because `setTitle` performs a state change.
After every render cycle, `useEffect` is executed again. To demonstrate this, I added two `console.log` statements:
The first two log outputs are due to the initial rendering after the component was mounted. Let’s add another state variable to the example to toggle a dark mode with the help of a checkbox:
```jsx
function EffectsDemoTwoStates() {
  const [title, setTitle] = useState("default title");
  const titleRef = useRef();
  const [darkMode, setDarkMode] = useState(false);
  useEffect(() => {
    console.log("useEffect");
    document.title = title;
  });
  console.log("render");
  const handleClick = () => setTitle(titleRef.current.value);
  const handleCheckboxChange = () => setDarkMode((prev) => !prev);
  return (
    <div className={darkMode ? "dark-mode" : ""}>
      <label htmlFor="darkMode">dark mode</label>
      <input
        name="darkMode"
        type="checkbox"
        checked={darkMode}
        onChange={handleCheckboxChange}
      />
      <input ref={titleRef} />
      <button onClick={handleClick}>change title</button>
    </div>
  );
}
```
However, this example leads to unnecessary effects when you toggle the `darkMode` state variable:
Of course, it’s not a big deal in this example, but you can imagine more problematic use cases that cause bugs or, at least, performance issues. Let’s take a look at the following code, which tries to read the initial title from local storage, if available, in an additional `useEffect` block:
```jsx
function EffectsDemoInfiniteLoop() {
  const [title, setTitle] = useState("default title");
  const titleRef = useRef();
  useEffect(() => {
    console.log("useEffect title");
    document.title = title;
  });
  useEffect(() => {
    console.log("useEffect local storage");
    const persistedTitle = localStorage.getItem("title");
    setTitle(persistedTitle || []);
  });
  console.log("render");
  const handleClick = () => setTitle(titleRef.current.value);
  return (
    <div>
      <input ref={titleRef} />
      <button onClick={handleClick}>change title</button>
    </div>
  );
}
```
As you can see, we have an infinite loop of effects because every state change with `setTitle` triggers another effect, which updates the state again.
## What belongs in the dependency array?

The `useEffect` Hook’s second argument, known as the dependency array, indicates the variables the effect relies on. This brings us to an important question: which items should be included in the dependency array?

According to the React docs, you must include all values from the component scope that change between re-renders. What does this mean, exactly?

All external values referenced inside of the `useEffect` callback function, such as props, state variables, or context variables, are dependencies of the effect. Ref containers (i.e., what you directly get from `useRef()`, not its `current` property) are also valid dependencies. Even local variables that are derived from the aforementioned values have to be listed in the dependency array.
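For example, here is an illustrative sketch (the component and prop names are made up) showing a derived local variable listed as a dependency:

```jsx
import React, { useEffect } from "react";

function Greeting({ name }) {
  // `message` is a local variable derived from the `name` prop,
  // so it belongs in the dependency array of the effect that uses it
  const message = `Hello, ${name}!`;
  useEffect(() => {
    document.title = message;
  }, [message]);
  return <h1>{message}</h1>;
}
```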
Let’s go back to our previous example with two states (title and dark mode). Why do we have the problem of unnecessary effects?

Again, if you do not provide a dependency array, every scheduled `useEffect` is executed. This means that after every render cycle, every effect defined in the corresponding component is executed one after the other, based on its position in the source code.
So the order of your effect definitions matters. In our case, our single `useEffect` statement is executed whenever one of the state variables changes. You can opt out of this behavior by providing dependencies as array entries. In those cases, React only executes the `useEffect` statement if at least one of the provided dependencies has changed since the previous run.
Let’s get back to our example, where we want to skip unnecessary effects after an intended re-render. We just have to add an array with `title` as a dependency. With that, the effect is only executed when the title values between render cycles differ:
```jsx
useEffect(() => {
  console.log("useEffect");
  document.title = title;
}, [title]);
```
Here’s the complete code snippet:
```jsx
function EffectsDemoTwoStatesWithDependeny() {
  const [title, setTitle] = useState("default title");
  const titleRef = useRef();
  const [darkMode, setDarkMode] = useState(false);
  useEffect(() => {
    console.log("useEffect");
    document.title = title;
  }, [title]);
  console.log("render");
  const handleClick = () => setTitle(titleRef.current.value);
  const handleCheckboxChange = () => setDarkMode((prev) => !prev);
  return (
    <div className={darkMode ? "view dark-mode" : "view"}>
      <label htmlFor="darkMode">dark mode</label>
      <input
        name="darkMode"
        type="checkbox"
        checked={darkMode}
        onChange={handleCheckboxChange}
      />
      <input ref={titleRef} />
      <button onClick={handleClick}>change title</button>
    </div>
  );
}
```
As you can see, effects are now only invoked as expected when pressing the button.

It’s also possible to add an empty dependency array. In this case, effects are only executed once, similar to the `componentDidMount()` lifecycle method. To demonstrate this, let’s take a look at the previous example with the infinite loop of effects:
```jsx
function EffectsDemoEffectOnce() {
  const [title, setTitle] = useState("default title");
  const titleRef = useRef();
  useEffect(() => {
    console.log("useEffect title");
    document.title = title;
  });
  useEffect(() => {
    console.log("useEffect local storage");
    const persistedTitle = localStorage.getItem("title");
    setTitle(persistedTitle || []);
  }, []);
  console.log("render");
  const handleClick = () => setTitle(titleRef.current.value);
  return (
    <div>
      <input ref={titleRef} />
      <button onClick={handleClick}>change title</button>
    </div>
  );
}
```
We just added an empty array as our second argument. Because of this, the effect is only executed once, after the first render, and skipped for all following render cycles.

In principle, the dependency array says, “Execute the effect provided by the first argument after the next render cycle whenever one of the arguments changes.” With an empty array, there are no arguments, so the dependencies will never change.

That’s why using an empty dependency array makes React invoke an effect only once, after the first render. The second render, along with the second `useEffect title` log, is due to the state change invoked by `setTitle()` after we read the value from local storage.
The next snippet demonstrates a problematic case:

```jsx
function Counter() {
  const [count, setCount] = useState(0);
  useEffect(() => {
    const interval = setInterval(function () {
      setCount((prev) => prev + 1);
    }, 1000);
  }, []);
  return <p>and the counter counts {count}</p>;
}

function EffectsDemoUnmount() {
  const [unmount, setUnmount] = useState(false);
  const renderDemo = () => !unmount && <Counter />;
  return (
    <div>
      <button onClick={() => setUnmount(true)}>Unmount child component</button>
      {renderDemo()}
    </div>
  );
}
```
This code implements a React component representing a counter that increases a number every second. The parent component renders the counter and allows you to destroy the counter by clicking on a button.

Here’s what happens when a user clicks that button: the child component has registered an interval that invokes a function every second. However, the component was destroyed without unregistering the interval. After the component is destroyed, the interval is still active and tries to update the state variable (`count`) of a component that no longer exists.
The solution is to unregister the interval right before unmounting. This is possible with a cleanup function: return a callback function from the effect’s callback body:

```jsx
useEffect(() => {
  const interval = setInterval(function () {
    setCount((prev) => prev + 1);
  }, 1000);
  // return optional function for cleanup
  // in this case it acts like componentWillUnmount
  return () => clearInterval(interval);
}, []);
```

I want to emphasize that cleanup functions are not only invoked before destroying the React component: an effect’s cleanup function is invoked right before the execution of the next scheduled effect.

Let’s take a closer look at our example. We used a trick to have an empty dependency array in the first place, so the cleanup function acts like a `componentWillUnmount()` lifecycle method. If we do not call `setCount` with a callback function that gets the previous value as an argument, we need the following code, wherein we add `count` to the dependency array:
```jsx
useEffect(() => {
  console.log("useEffect");
  const interval = setInterval(function () {
    setCount(count + 1);
  }, 1000);
  // return optional function for cleanup
  // in this case, this cleanup fn is called every time count changes
  return () => {
    console.log("cleanup");
    clearInterval(interval);
  };
}, [count]);
```
In comparison, the version with an empty dependency array executes the cleanup function only once, on unmount, because we avoided reading the state variable `count` directly:

```jsx
useEffect(() => {
  console.log("useEffect");
  const interval = setInterval(function () {
    setCount((prev) => prev + 1);
  }, 1000);
  // return optional function for cleanup
  // in this case, the cleanup fn is only called on unmount
  return () => {
    console.log("cleanup");
    clearInterval(interval);
  };
}, []);
```

In this context, the latter approach is a small performance optimization because we reduce the number of cleanup function calls.
## Comparing useState and useEffect

Both `useState` and `useEffect` enhance functional components, allowing them to do things that classes can do but that functional components can’t do without Hooks. To understand the difference between the two, we first need to understand the purpose of each Hook:
### useState

The `useState` Hook is used to manage state variables within a functional component, akin to how `this.state` works in class components. With `useState`, you can declare and initialize a state variable, and the Hook provides a function to update its value.
### useEffect

We have already delved into `useEffect` in detail. In essence, it empowers functional components with capabilities similar to the lifecycle methods of class components. You can employ `useEffect` to perform actions such as data fetching, DOM manipulation, or establishing subscriptions in response to component lifecycle events.
While the two Hooks serve distinct purposes, they are frequently used together. For instance, state variables often appear in the dependency array of an effect, so the effect re-runs whenever that state changes.
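Here is a small illustrative sketch of that interplay (the component name is made up): the effect lists a state variable as a dependency, so it re-runs after every update to that state:

```jsx
import { useState, useEffect } from "react";

function ThemeToggle() {
  const [theme, setTheme] = useState("light");

  // Runs after the first render and after every change to `theme`
  useEffect(() => {
    document.body.className = theme;
  }, [theme]);

  return (
    <button onClick={() => setTheme((t) => (t === "light" ? "dark" : "light"))}>
      switch theme
    </button>
  );
}
```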
There is a natural correlation between prop changes and the execution of effects, because prop changes cause re-renders, and as we already know, effects are scheduled after every render cycle.

Consider the following example. The plan is for the `Counter` component’s interval to be configurable by a prop with the same name:
```jsx
function Counter({ interval }) {
  const [count, setCount] = useState(0);
  useEffect(() => {
    const counterInterval = setInterval(function () {
      setCount((prev) => prev + 1);
    }, interval);
    return () => clearInterval(counterInterval);
  }, []);
  return <p>and the counter counts {count}</p>;
}

function EffectsDemoProps() {
  const [interval, setInterval] = useState(1000);
  return (
    <div>
      <input
        type="text"
        value={interval}
        onChange={(evt) => setInterval(evt.target.value)}
      />
      <Counter interval={interval} />
    </div>
  );
}
```
The handy ESLint plugin points out that we are missing something important: because we haven’t added the `interval` prop to the dependency array (we defined an empty array instead), the change to the input field in the parent component has no effect. The initial value of `1000` is used even after we adjust the input field’s value.
Instead, we have to add the prop to the dependency array:

```jsx
useEffect(() => {
  const counterInterval = setInterval(function () {
    setCount((prev) => prev + 1);
  }, interval);
  return () => clearInterval(counterInterval);
}, [interval]);
```
Now things look much better.

## useEffect inside of custom Hooks

Custom Hooks are awesome because they lead to various benefits:

- Reusable logic across multiple components
- Cleaner components, with business logic abstracted away
- A more semantic API than calling effects directly inside a component

The following example represents a custom Hook for fetching data. We moved the `useEffect` code block into a function representing the custom Hook. Note that this is a rather simplified implementation that might not cover all of your project’s requirements; more production-ready custom fetch Hooks are available elsewhere:
```jsx
import { useState, useEffect } from "react";
import axios from "axios";

const useFetch = (url, initialValue) => {
  const [data, setData] = useState(initialValue);
  const [loading, setLoading] = useState(true);
  useEffect(() => {
    const fetchData = async function () {
      try {
        setLoading(true);
        const response = await axios.get(url);
        if (response.status === 200) {
          setData(response.data);
        }
      } catch (error) {
        throw error;
      } finally {
        setLoading(false);
      }
    };
    fetchData();
  }, [url]);
  return { loading, data };
};

function EffectsDemoCustomHook() {
  const { loading, data } = useFetch(
    "https://jsonplaceholder.typicode.com/posts/"
  );
  return (
    <div className="App">
      {loading && <div className="loader" />}
      {data?.length > 0 &&
        data.map((blog) => <p key={blog.id}>{blog.title}</p>)}
    </div>
  );
}
```
The first statement within our React component, `EffectsDemoCustomHook`, uses the custom Hook `useFetch`. As you can see, using a custom Hook like this is more semantic than using an effect directly inside the component.
Business logic is nicely abstracted out of the component. We just use our custom Hook’s nice API, which returns the state variables `loading` and `data`.
The effect inside of the custom Hook depends on the scope variable `url` that is passed to the Hook as a prop, which is why we have to include it in the dependency array. Even though we don’t foresee the URL changing in this example, it’s still good practice to define it as a dependency; there is a chance that the value will change at runtime in the future.
If you take a closer look at the last example, we defined the `fetchData` function inside the effect because we only use it there. This is a best practice for such a use case. If we define it outside the effect, we need to write unnecessarily complex code:
```jsx
import { useState, useEffect, useCallback } from "react";
import axios from "axios";

const useFetch = (url, initialValue) => {
  const [data, setData] = useState(initialValue);
  const [loading, setLoading] = useState(true);
  const fetchData = useCallback(async () => {
    try {
      setLoading(true);
      const response = await axios.get(url);
      if (response.status === 200) {
        setData(response.data);
      }
    } catch (error) {
      throw error;
    } finally {
      setLoading(false);
    }
  }, [url]);
  useEffect(() => {
    fetchData();
  }, [fetchData]);
  return { loading, data };
};
```
As you can see, we need to add `fetchData` to the dependency array of our effect. In addition, we need to wrap the actual function body of `fetchData` with `useCallback`, with its own dependency (`url`), because the function would otherwise be re-created on every render.
By the way, if you move function definitions into effects, you produce more readable code because it is directly apparent which scope values the effect uses. It also makes the code more robust.
\\nFurthermore, if you do not pass dependencies into the component as props or context, the ESLint plugin “sees” all relevant dependencies and can suggest forgotten values to be declared.
Using async functions inside useEffect
If you recall our useEffect
block inside the useFetch
custom Hook, you might ask why we need this extra fetchData
function definition. Can’t we refactor our code like so:
```js
useEffect(async () => {
  try {
    setLoading(true);
    const response = await axios.get(url);
    if (response.status === 200) {
      setData(response.data);
    }
  } catch (error) {
    throw error;
  } finally {
    setLoading(false);
  }
}, [url]);
```
I’m glad you asked, but no! The following error will occur:
\\nThe mighty ESLint plugin also warns you about it. The reason is that this code returns a promise, but an effect can only return void or a cleanup function.
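If you prefer not to name an inner function, an immediately invoked async function expression achieves the same thing. Here's a minimal sketch, assuming the same url, setData, and setLoading from the useFetch Hook above:

```js
useEffect(() => {
  // The async work happens inside; the effect itself returns undefined,
  // so React no longer receives a promise
  (async () => {
    try {
      setLoading(true);
      const response = await axios.get(url);
      if (response.status === 200) {
        setData(response.data);
      }
    } finally {
      setLoading(false);
    }
  })();
}, [url]);
```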
useEffect examples

In this section, I'll show you some handy patterns that might be useful when using the useEffect Hook.
As we already know, you control the execution of effects mainly with the dependency array. Every time one of the dependencies is changed, the effect is executed. You should design your components to execute effects whenever a state changes, not just once.
\\nSometimes, however, you want to trigger an effect only under specific conditions, such as when a certain event occurs. You can do this with flags that you use within an if
statement inside of your effect.
The useRef
Hook is a good choice if you don't want to trigger an extra render (which would be problematic most of the time) when updating the flag. In addition, you don't have to add the ref to the dependency array.
The following example calls the function trackInfo
from our effect only if the following conditions are met:
After the checkbox is ticked, the tracking function should only be executed after the user clicks on the button once again:
```jsx
function EffectsDemoEffectConditional() {
  const [count, setCount] = useState(0);
  const [trackChecked, setTrackChecked] = useState(false);
  const shouldTrackRef = useRef(false);
  const infoTrackedRef = useRef(false);
  const trackInfo = (info) => console.log(info);
  useEffect(() => {
    console.log("useEffect");
    if (shouldTrackRef.current && !infoTrackedRef.current) {
      trackInfo("user found the button component");
      infoTrackedRef.current = true;
    }
  }, [count]);
  console.log("render");
  const handleClick = () => setCount((prev) => prev + 1);
  const handleCheckboxChange = () => {
    setTrackChecked((prev) => {
      shouldTrackRef.current = !prev;
      return !prev;
    });
  };
  return (
    <div>
      <p>
        <label htmlFor="tracking">Declaration of consent for tracking</label>
        <input
          name="tracking"
          type="checkbox"
          checked={trackChecked}
          onChange={handleCheckboxChange}
        />
      </p>
      <p>
        <button onClick={handleClick}>click me</button>
      </p>
      <p>User clicked {count} times</p>
    </div>
  );
}
```
In this implementation, we utilized two refs: shouldTrackRef
and infoTrackedRef
. The latter is the “gate” to guarantee that the tracking function is only invoked once after the other conditions are met.
The effect is rerun every time count
changes, i.e., whenever the user clicks on the button. Our if
statement checks the conditions and executes the actual business logic only if it evaluates to true
:
The log message user found the button component
is only printed once after the right conditions are met.
If you need to access some data from the previous render cycle, you can leverage a combination of useEffect
and useRef
:
```jsx
function EffectsDemoEffectPrevData() {
  const [count, setCount] = useState(0);
  const prevCountRef = useRef();
  useEffect(() => {
    console.log("useEffect", `state ${count}`, `ref ${prevCountRef.current}`);
    prevCountRef.current = count;
  }, [count]);
  const handleClick = () => setCount((prev) => prev + 1);
  console.log("render");
  return (
    <div>
      <p>
        <button onClick={handleClick}>click me</button>
      </p>
      <p>
        User clicked {count} times; previous value was {prevCountRef.current}
      </p>
    </div>
  );
}
```
We synchronize our effect with the state variable count
so that it is executed after the user clicks on the button. Inside of our effect, we assign the current value of the state variable to the mutable current
property of prevCountRef
. We output both values in the JSX section:
When loading this demo, on initial render, the state variable has the initial value of the useState
call. The ref value is undefined
. It demonstrates once more that effects are run after render. When the user clicks, it works as expected.
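This pattern is common enough that many codebases extract it into a small custom Hook. Here's a possible sketch; note that this helper is a community idiom, not part of React itself:

```js
function usePrevious(value) {
  const ref = useRef();
  useEffect(() => {
    // Runs after render, so consumers read the value from the previous cycle
    ref.current = value;
  }, [value]);
  return ref.current;
}

// Usage inside a component:
// const prevCount = usePrevious(count);
```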
When not to use useEffect
There are some situations in which you should avoid using useEffect
due to potential performance concerns.
If you need to transform data before rendering, then you don’t need useEffect
. Suppose you are showing a user list and only want to filter the user list based on some criteria. Maybe you only want to show the list of active users:
```tsx
export const UserList = ({ users }: IUserProps) => {
  // The following state and effect are completely unnecessary
  const [filteredUsers, setFilteredUsers] = useState([]);
  useEffect(() => {
    const activeUsers = users.filter((user) => user.active);
    setFilteredUsers(activeUsers);
  }, [users]);

  return (
    <div>
      {filteredUsers.map((user) => (
        <div key={user.id}> {user.name} </div>
      ))}
    </div>
  );
};
```
Here you can just do the filtering and show the users directly, like so:
```tsx
export const UserList = ({ users }: IUserProps) => {
  const filteredUsers = users.filter((user) => user.active);
  return (
    <div>
      {filteredUsers.map((user) => (
        <div key={user.id}> {user.name} </div>
      ))}
    </div>
  );
};
```
This will save you time and improve the performance of your application.
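If the filtering ever becomes expensive, you can memoize the derived value instead of reaching for an effect. A sketch, assuming the same users prop as above:

```tsx
import { useMemo } from "react";

export const UserList = ({ users }: IUserProps) => {
  // Recomputed only when the users prop changes; still no state or effect
  const filteredUsers = useMemo(
    () => users.filter((user) => user.active),
    [users]
  );
  return (
    <div>
      {filteredUsers.map((user) => (
        <div key={user.id}> {user.name} </div>
      ))}
    </div>
  );
};
```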
\\nYou don’t need useEffect to handle
user events. Let’s say you want to make a POST request once a user clicks on a form submit button. The following piece of code is inspired from React’s documentation:
```jsx
function Form() {
  // Avoid: Event-specific logic inside an Effect
  const [jsonToSubmit, setJsonToSubmit] = useState(null);

  useEffect(() => {
    if (jsonToSubmit !== null) {
      post('/api/register', jsonToSubmit);
    }
  }, [jsonToSubmit]);

  function handleSubmit(e) {
    e.preventDefault();
    setJsonToSubmit({ firstName, lastName });
  }
}
```
In the above code, you could simply make the POST request when the button is clicked. Instead, the logic cascades through an effect, so by the time the useEffect runs, it no longer has the complete context of what happened.
This might cause issues in the future; instead, you can just make the POST request in the handleSubmit function:
```jsx
function Form() {
  function handleSubmit(e) {
    e.preventDefault();
    const jsonToSubmit = { firstName, lastName };
    post('/api/register', jsonToSubmit);
  }
}
```
This is much cleaner and can help reduce future bugs.
useEffect and React Server Components
React Server Components (RSC) let you render parts of your UI on the server — before any JavaScript is loaded in the browser. This means users see meaningful content faster, without a blank screen while JS bundles load.
\\nBut here’s the catch. Server Components don’t run in the browser. That means you can’t use Hooks like useState
or useEffect
inside them — because those only run after the component mounts on the client.
So what can you do?
\\nThink of Server Components as data preparers — They fetch and pre-render content on the server
\\nThink of Client Components as behavior handlers — They manage interactivity and effects like event listeners, animations, and local state
Using useEffect with Server Components

You still use useEffect — just not in Server Components. Instead:
- Use Server Components to fetch and format data on the server
- Pass that data as props to a Client Component
- Let the Client Component handle any client-only logic using useEffect
Here’s a basic example:
```jsx
// ServerComponent.jsx (runs on server)
import ClientComponent from './ClientComponent';

export default async function ServerComponent() {
  const data = await fetchDataFromDB(); // runs on server
  return <ClientComponent data={data} />;
}
```
```jsx
// ClientComponent.jsx (runs in browser)
"use client";
import { useEffect } from "react";

export default function ClientComponent({ data }) {
  useEffect(() => {
    // This runs in the browser after mount
    console.log("Client-side effect:", data);
  }, [data]);

  return <div>{data.title}</div>;
}
```
This separation improves performance, reduces JS bundle size, and gives you clearer control over where and how effects run.
\\nIf you want to level up your React skills, understanding how useEffect
works under the hood — and how to use it well — is essential.
Before Hooks came along in 2019, side effects lived in lifecycle methods like componentDidMount
or componentDidUpdate
. But useEffect
shifts the mindset: instead of thinking in terms of “when” something happens, you think in terms of reactive dependencies — what changed, and what should happen because of it.
Adopting this mental model helps you:
- Build a deeper understanding of React's rendering behavior and lifecycle
- Write more composable code with other Hooks like useState, useRef, useCallback, and useContext
- Spot unnecessary re-renders and optimize with tools like React.memo or useMemo
- Manage side effects like data fetching, subscriptions, timers, or DOM manipulation cleanly and predictably
\\nuseEffect
isn’t just a tool — it’s a lens into how your React app responds to changes. Mastering it helps you write components that are more efficient, reusable, and easier to reason about.
Editor's note: This article was last updated by Shalitha Suranga on 27 May 2024 to include responsive breakpoint testing, practical examples for creating breakpoints, and to refresh outdated information.
\\nResponsive web design is a CSS-based design technique that ensures webpages render properly across all screen sizes and resolutions while ensuring high usability. Users access the internet with a variety of devices that have different screen sizes, so web designers have to implement a way to display their websites properly on those screens to ensure usability.
\\nIn this article, we’ll look at the evolution of responsive design, from media queries to grid systems, container queries, and, finally, fluid design. We’ll discuss the role of breakpoints in responsive design, reviewing different methods of choosing breakpoints and some best practices.
\\nHTML is fundamentally responsive. If you create a webpage using only HTML and resize the window, the browser will automatically adjust the text to fit the viewport. But your content won’t look good on every screen!
\\nFor example, long lines of text can be difficult to read on a wide monitor. Similarly, if the line length is reduced with CSS by creating columns or adding a margin, the content may look squashed when viewed on a mobile device. You have to intervene to adapt the style to the screen based on the content and layout of your webpage.
\\nBefore the responsive design concept, web designers created two website versions for mobile and desktop screens. This approach increased the design process complexity as designers had to maintain two or more website versions.
\\nThe term “responsive design” was coined by Ethan Marcotte in 2010 and described using fluid grids, fluid images, and media queries to create responsive content. At the time, the recommendation was to use float
for layouts and media queries to query the browser width or height to create layouts for different breakpoints using CSS.
A breakpoint is a point, usually a specific width, at which a webpage’s style is changed to ensure the best possible user experience. For example, most responsive websites use a 768px screen width as a breakpoint to differentiate tablet and desktop screens for rendering different layouts for both screen types. CSS frameworks like Bootstrap rose in popularity because they provided designers with responsive grid systems that use pre-defined breakpoints to implement responsive layouts.
\\nWith modern CSS, less intervention is required to resize or change layouts for different screen sizes. Inbuilt CSS layout implementation methods such as flexbox and grid have responsive capabilities, and other modern methods have been developed to make content responsive without even using responsive layout libraries:
- The clamp() function: Allows typography and spacing to be responsive to the viewport width

CSS will continue to evolve and offer more responsive design features, but the media query breakpoints strategy is the foundational and most beginner-friendly approach to making any webpage responsive.
\\nLet’s cover media queries before we dive into breakpoints.
\\nMedia queries are useful when you want to modify the layout or appearance of your site depending on specific system or browser characteristics such as the screen resolution of the device or the browser viewport width/height.
\\n\\nA media query definition is composed of four segments:
- The @media at-rule that defines a media query
- A media type, such as all, print, or screen. This type is optional; it is assumed to be all if omitted. Logical type operators, only and not, are supported before the media type
- Media features, such as hover, prefers-reduced-motion, and width
- Logical operators (and, or, and not) that connect media query features to build up media query expressions

The common syntax for a CSS media query is as follows:
\\n@media <type> <operator> (feature) <operator> (feature) {\\n /* CSS rules */\\n}\\n\\n
The logical operators not
, and
, only
, and or
can be used to compose a complex media query.
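For instance, here's a sketch of two composed queries using these operators:

```css
/* Applies only on screens between 768px and 1024px wide */
@media screen and (min-width: 768px) and (max-width: 1024px) {
  /* CSS rules */
}

/* Applies to every media type except print */
@media not print {
  /* CSS rules */
}
```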
For responsive design, min-width
and max-width
are the most commonly used media features. They help web designers create responsive breakpoints based on specific width ranges of the viewport. For example, the following CSS code will apply styles only if the browser’s viewport width is equal to or less than 80em
:
```css
@media (max-width: 80em) {
  /* CSS rules */
}
```
You can also use height (height
, min-height
, and max-height
), aspect-ratio
, resolution
, and orientation
in media feature expressions to deal with the viewport’s dimensions and different aspects of the screen.
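For example, a sketch combining a width breakpoint with orientation:

```css
/* Applies only when a tablet-sized screen is held in landscape */
@media (min-width: 768px) and (orientation: landscape) {
  /* CSS rules */
}
```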
The Media Queries Level 4 specification includes syntax improvements that give media features with a "range" type, e.g., width, a less verbose form. With this syntax, our previous max-width example could be written like so:
```css
@media (width <= 80em) {
  /* CSS rules */
}
```
The above syntax works on all popular browser versions released after 2023. See full browser support details for the media query range syntax here.
\\nAs we defined earlier, a breakpoint is the point at which a webpage’s style is adapted in a particular way to provide the best user experience.
\\nThere are two broad approaches when choosing CSS breakpoints: one is based on devices and the other is based on content. Let’s take a look.
\\nYou can target and produce a different design for specific screen sizes. A design may work across multiple screen sizes, but, the content may be narrower when less space is available.
\\nWith the breadth and variety of devices available, determining breakpoints based on screen sizes is challenging. This approach is really not feasible to maintain:
\\nTo simplify this approach, web designers tend to loosely group devices based on a range of sizes. It’s up to you to choose the groupings and specific breakpoints. The most common way is to group devices based on form factor (e.g., mobile devices, tablets, laptops, etc.):
\\nHere is some data you could use to arrive at this decision:
\\nThere is no strict rule or standard to define responsive breakpoints because there are so many different screen sizes. Creating more device breakpoints offers the best results but it increases web design time and delays product delivery. On the other hand, creating fewer breakpoints boosts responsive design time but generates fewer layout variations affecting usability. So, selecting breakpoints based on your design and team preference is undoubtedly a good idea.
\\n\\nLet’s check several common breakpoints that most websites nowadays use. The following CSS snippet uses four breakpoints with a mobile-first design strategy (the default style is for the smallest screen group):
```css
/* Default: Extra-small devices such as small phones (less than 640px) */

/* Small devices such as large phones (640px and up) */
@media only screen and (min-width: 640px) {...}

/* Medium devices such as tablets (768px and up) */
@media only screen and (min-width: 768px) {...}

/* Large devices such as laptops (1024px and up) */
@media only screen and (min-width: 1024px) {...}

/* Largest devices such as desktops (1280px and up) */
@media only screen and (min-width: 1280px) {...}
```
Here is another example CSS snippet that only defines two breakpoints with a desktop-first design strategy (the default style is for the largest screen group):
```css
/* Default: Large devices such as laptops, computers (greater than 1024px) */

/* Medium devices such as tablets (1024px or less) */
@media only screen and (max-width: 1024px) {...}

/* Small devices such as phones (768px or less) */
@media only screen and (max-width: 768px) {...}
```
Now, let’s create a simple responsive login form that uses the above breakpoints setup:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
    <title>Responsive design breakpoints example</title>
    <style>
      * {
        margin: 0;
        padding: 0;
        box-sizing: border-box;
      }
      .form-box {
        display: flex;
        justify-content: flex-end;
        gap: 8px;
        padding: 8px;
        background-color: #333;
        text-align: center;
      }
      .form-box input,
      .form-box button {
        padding: 8px;
        margin-right: 4px;
        font-size: 14px;
      }
      .form-box input {
        outline: none;
        border: none;
      }
      .form-box button {
        border: none;
        background-color: #edae39;
      }

      @media only screen and (max-width: 1024px) {
        .form-box input,
        .form-box button {
          display: block;
          width: 100%;
          font-size: 16px;
        }
      }

      @media only screen and (max-width: 768px) {
        .form-box {
          flex-direction: column;
        }
        .form-box input,
        .form-box button {
          display: block;
          width: 100%;
          font-size: 20px;
        }
      }
    </style>
  </head>

  <body>
    <div class="form-box">
      <input type="text" value="Username" />
      <input type="password" value="Password" />
      <button>Login</button>
    </div>
  </body>
</html>
```
The above code snippet renders a responsive login section for desktop, tablet, and mobile screens, as shown in this preview:
\\nThis next approach is based on changing the design at the point where the content starts to break in some way. If the line lengths become too long, or if a section becomes too squashed, that’s where you need to consider changing the style. In other words, that’s the point where you want to use a media query or container query to change the design.
\\nThe responsive mode in browser developer tools (Responsive Design Mode in Firefox DevTools and Device Mode in Chrome DevTools) is very useful for working out where your breakpoints should go. You can easily make the viewport smaller or larger to see where the content style could be improved.
\\nRemove all media query segments of the sample HTML code that I used to demonstrate device breakpoints. Open it in your web browser, open developer tools, and activate the responsive design mode. Next, start increasing/decreasing the page width to identify possible breakpoints, as shown in the following preview:
\\nHere, the login form is not correctly getting rendered when the width is less than 486px, so we can create a breakpoint with a 458px pixel value (because we use max-width
) as follows to improve responsiveness:
```css
@media only screen and (max-width: 485px) {
  .form-box {
    flex-direction: column;
  }
  .form-box input,
  .form-box button {
    display: block;
    width: 100%;
  }
}
```
The above custom breakpoint makes the login form look better on small screen sizes, as demonstrated in the following preview:
\\nYou can also use CSS container queries to implement these responsive features.
\\nI wouldn’t say that there is one path to follow here. However, I recommend that you do not constrain yourself by thinking only in terms of particular devices. Instead, focus more on utilizing the space available to your content.
\\nGenerally, we will use media queries less as time goes on, although we’re likely to still use media queries for components that are tied to the viewport width, like the website’s main navigation and footer. In other cases, you can design content to be fluid or adapt to the container size through container queries.
\\nThere can be value in having a set of breakpoints. Whether you take a set from the first approach or come up with the breakpoints organically through testing the interface is up to you. I would say that it is easier to debug layout issues when you have a set of breakpoints, rather than having many ad-hoc breakpoints.
\\nHowever, having a set of six breakpoints does not mean you should use them all to adjust a layout or style! Aim to minimize intervention — look for opportunities for the content to do the work for you!
\\nWe can design websites by wrapping styles with device or custom breakpoints, as we discussed in previous practical examples. Also, we can create our own column-based layout system (also known as a responsive grid system) by developing several utility classes. However, most modern web designers now prefer using CSS frameworks and skip writing CSS breakpoints themselves.
\\nEvery CSS framework typically offers a row-column-based layout by offering two pre-developed CSS class types:
- A row wrapper class, e.g., row
- Column classes with a breakpoint and width segment, e.g., column-mobile-1 (1/12 of row width only on mobile screens) or column-desktop-6 (6/12 of row width on desktop screens)

For example, the following HTML snippet renders a 25%-width column on desktop screens, a 50%-width column on tablet screens, and a 100%-width column on mobile screens:
```html
<div class="row">
  <div class="column-desktop-3 column-tablet-6 column-mobile-12"></div>
</div>
```
Most CSS frameworks follow this 12-column responsive layout with device breakpoints, under various CSS class names. For example, Bootstrap column classes look like col-md-12, col-sm-4, col-lg-3, etc.
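Rolling your own minimal version of these utilities is straightforward. Here's a sketch using the hypothetical class names from the example above:

```css
.row {
  display: flex;
  flex-wrap: wrap;
}

/* Mobile-first default: every column spans the full row */
[class*="column-"] {
  width: 100%;
}

@media only screen and (min-width: 768px) {
  .column-tablet-6 { width: 50%; }
}

@media only screen and (min-width: 1024px) {
  .column-desktop-3 { width: 25%; }
}
```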
According to the State of CSS 2023 survey, the most popular CSS frameworks (ordered in terms of usage) are:
\\nLet’s look at the responsive device breakpoints of these popular CSS frameworks.
\\nYou can see the default breakpoints for Bootstrap below:
\\nBreakpoint identifier | \\nMedia query | \\nMinimum width breakpoint | \\n
---|---|---|
None (default/smallest) | \\nN/A | \\n< 576px | \\n
sm | \\n@media (min-width: 576px) | \\n≥ 576px | \\n
md | \\n@media (min-width: 768px) | \\n≥ 768px | \\n
lg | \\n@media (min-width: 992px) | \\n≥ 992px | \\n
xl | \\n@media (min-width: 1200px) | \\n≥ 1200px | \\n
xxl | \\n@media (min-width: 1400px) | \\n≥ 1400px | \\n
Tailwind has five default breakpoints that are inspired by common device resolutions:
\\nBreakpoint identifier | \\nMedia query | \\nMinimum width breakpoint | \\n
---|---|---|
None (default/smallest) | \\nN/A | \\n< 640px | \\n
sm | \\n@media (min-width: 640px) | \\n≥ 640px | \\n
md | \\n@media (min-width: 768px) | \\n≥ 768px | \\n
lg | \\n@media (min-width: 1024px) | \\n≥ 1024px | \\n
xl | \\n@media (min-width: 1280px) | \\n≥ 1280px | \\n
2xl | \\n@media (min-width: 1536px) | \\n≥ 1536px | \\n
Here’s a summary of breakpoints for some of the other popular CSS frameworks:
\\nCSS framework | \\nBreakpoints | \\n
---|---|
Materialize CSS | \\n<600px, ≥600px, ≥992px, and ≥1200px | \\n
Foundation | \\n<640px, ≥640px, ≥1024px | \\n
Ant Design | \\n<576px, ≥576px, ≥768px, ≥992px, ≥1200px, and ≥1600px | \\n
Bulma | \\n<769px, ≥769px, ≥1024px, ≥1216px, and ≥1408px | \\n
Pure CSS | \\n<568px, ≥568px, ≥768px, ≥1024px, ≥1280px, ≥1920px, ≥2560px, and ≥3840px | \\n
Semantic UI | \\n<768px, ≥768px, ≥992px, ≥1400px, ≥1920px | \\n
UIKit | \\n<480px, ≥480px, ≥768px, ≥960px, ≥1200px | \\n
Open Props | \\n<240px, ≥240px, ≥360px, ≥480px, ≥768px, ≥1024px, ≥1440px, and ≥1920px | \\n
A few CSS frameworks offer breakpoints for very small devices like smartwatches, and several frameworks encourage designers to implement dynamic layouts for huge screens by offering high breakpoint width values. Overall, every framework generally encourages designers to implement different layouts for mobile, tablet, and desktop screens.
\\nHere are some important best practices to keep in mind while creating responsive breakpoints:
\\nIn the past, web designers typically tested responsive breakpoints just by resizing the browser window. Now, every popular browser offers an inbuilt responsive testing mode, and cloud-based website testing tools let developers use real devices for testing responsive websites. Moreover, designers nowadays can use simulators/emulators if they target specific devices. Most designers use the built-in browser responsive mode for testing responsive breakpoints.
\\nChrome DevTools offers the device mode feature that simulates device resolution, device orientation, and user agent string with various pre-defined device profiles. It also offers a way to simulate slow devices by throttling CPU and network speeds. Let’s check how to test responsive breakpoints with Chrome’s design mode.
\\nFirst, open a webpage that has responsive breakpoints. For this demonstration, I’ll use the sample login form we used previously to demonstrate device breakpoints. Next, open the Chrome device mode as follows:
\\nMake sure the dimensions select box has the responsive option selected. Enter each breakpoint value and check whether the layout renders as expected by also increasing and decreasing the screen width as follows:
\\nYou can also do this testing by resizing the responsive screen container width, using inbuilt breakpoint templates, or selecting a device profile, as shown in the following preview:
\\nFirefox offers Responsive Design Mode to test CSS responsive breakpoints. Some cloud apps like BrowserStack offer web-based responsive testing with real devices.
\\nSome emerging techniques allow elements to scale proportionally and fluidly without using breakpoints. Sometimes this is referred to as fluid design.
\\nMany fluid design techniques use mathematical functions available in CSS, such as clamp()
, min()
, and max()
, along with dynamic units based on the viewport, such as vh
and vw
, to create expressions that will scale elements. If you would like to learn more about this, here’s an article on flexible layouts without media queries.
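For example, fluid typography can be expressed with a single clamp() declaration, no media queries required:

```css
/* Scales smoothly between 1.5rem and 2.5rem as the viewport grows */
h1 {
  font-size: clamp(1.5rem, 1rem + 2.5vw, 2.5rem);
}
```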
One systematic approach to fluid design is Utopia. Utopia advocates for designers and developers to share a systematic approach to fluidity in responsive design. Instead of designing for any particular number of arbitrary breakpoints, you design a system within which elements scale proportionally and fluidly. This can help you to:
\\nUtopia is like a fancy calculator that will spit out some CSS. Just input some dimensions and a preferred scale to determine the range of values.
\\nFor example, this is how the fluid space calculator looks:
\\nIf you use clamp()
in Utopia’s calculator, it will generate the following CSS snippet:
```css
/* @link https://utopia.fyi/space/calculator?c=320,18,1.2,1240,20,1.25,5,2,&s=0.75|0.5,1.5|2|3|4|6,s-l&g=s,l,xl,12 */

:root {
  --space-2xs: clamp(0.5625rem, 0.5408rem + 0.1087vi, 0.625rem);
  --space-xs: clamp(0.875rem, 0.8533rem + 0.1087vi, 0.9375rem);
  --space-s: clamp(1.125rem, 1.0815rem + 0.2174vi, 1.25rem);
  --space-m: clamp(1.6875rem, 1.6223rem + 0.3261vi, 1.875rem);
  --space-l: clamp(2.25rem, 2.163rem + 0.4348vi, 2.5rem);
  --space-xl: clamp(3.375rem, 3.2446rem + 0.6522vi, 3.75rem);
  --space-2xl: clamp(4.5rem, 4.3261rem + 0.8696vi, 5rem);
  --space-3xl: clamp(6.75rem, 6.4891rem + 1.3043vi, 7.5rem);

  /* One-up pairs */
  --space-2xs-xs: clamp(0.5625rem, 0.4321rem + 0.6522vi, 0.9375rem);
  --space-xs-s: clamp(0.875rem, 0.7446rem + 0.6522vi, 1.25rem);
  --space-s-m: clamp(1.125rem, 0.8641rem + 1.3043vi, 1.875rem);
  --space-m-l: clamp(1.6875rem, 1.4049rem + 1.413vi, 2.5rem);
  --space-l-xl: clamp(2.25rem, 1.7283rem + 2.6087vi, 3.75rem);
  --space-xl-2xl: clamp(3.375rem, 2.8098rem + 2.8261vi, 5rem);
  --space-2xl-3xl: clamp(4.5rem, 3.4565rem + 5.2174vi, 7.5rem);

  /* Custom pairs */
  --space-s-l: clamp(1.125rem, 0.6467rem + 2.3913vi, 2.5rem);
}
```
No media query is required here. You can use these CSS variables in your padding and margins to create proportional spacing between elements throughout your website. Look at the following sample HTML code snippet:
```html
<p style="padding: var(--space-s-m); background: #aaa">
  Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent sit amet nisl elementum,
  consequat ipsum faucibus, lobortis nibh. Nunc tempus, tellus vitae blandit viverra,
  est ipsum dapibus augue, vel euismod diam diam ut urna.
</p>
```
The above code snippet renders a dynamic padding for the paragraph without using media queries, as shown in the following preview:
\\nYou can achieve fluidity with typography, spacing, and grid-based layouts. However, this may not be enough to make a completely responsive website and may look complex for some web designers. Read more about fluid design principles from the Utopia design concept documentation.
\\nResponsive design is challenging, but it’s getting easier. Nowadays, choosing breakpoints is less fraught; there’s a wider acceptance that we are not trying to create a pixel-perfect rendering of a website across many screen sizes.
\\nCSS has evolved a lot, and it is now possible to create fluid designs that adapt to the available space and require less intervention. It is still important to understand CSS responsive breakpoints, however — you will eventually need them!
\\nNow you can choose breakpoints according to the content and the design task in front of you rather than follow a prescribed path. You have the option of implementing breakpoints according to the viewport (media queries) or according to blocks of elements (container queries). This will simplify the process of creating responsive designs in the long run.
The excitement around large language models (LLMs) has led to many products with LLM-powered features. There are features like auto-complete, summarization, and natural language queries — integrated into otherwise conventional products. But most of these features are simply bolted onto legacy UX without rethinking the product from the ground up.
\\nIn this article, I’ll explore what it means to design a truly LLM-first product — where generative capabilities are not just features but the organizing principle of the user experience. I’ll contrast the status quo with native design approaches and lay out a framework for teams building the next generation of AI-native applications.
\\nNote — I do not advise using a chat-based UI as the default. LLMs are more than just smarter chatbots.
\\nToday’s typical LLM integrations are lightweight, assistive, and safe. They slot into existing workflows without changing the core logic or UX of a product. Examples:
\\nThese are all LLM-powered features — and they’re helpful. But they’re bolted onto existing patterns. They’re ultimately cosmetic. They don’t challenge or rethink how the product works, what it could do, or what the user’s role might become.
\\nEven from a UI perspective, the user perceives their integration as a visual add-on — an additional button with evocative icons beside the traditional buttons of a given web app, or a feeble animated robot waving a hand in the bottom corner of a webpage trying to engage the user in a chat session to provide support.
\\nWhy is this the default? Because it’s low-risk and incremental. But it also treats the LLM as an advanced autocomplete or search engine — assistive, not transformative.
\\nLLM-native products represent a fundamental historical occasion, conceptualizing software that requires more creativity to move beyond simply grafting AI capabilities onto existing paradigms. It means architecting the experience around natural language as the primary interface. Here, users express intent conversationally, and the LLM becomes a core system actor — not a helper, but a decision-maker.
\\nThese systems elevate AI from passive assistant to autonomous actor, capable of independent decision-making and complex task execution without constant human oversight.
\\nAdopting an LLM-first approach triggers profound transformations in product development. In the following, I’ll briefly discuss how this new approach to application design impacts different phases/components of a complex system:
\\nLLM-first systems are inherently uncertain — not just because users are unpredictable, but because the UI itself can invent new flows.
\\nTraditional testing methods don’t account for this dual-sided creativity. It’s not enough to test static elements — you need to test behaviors that evolve dynamically and contextually. In a sense, the LLM becomes a co-designer of the interface, making conventional QA much harder — the users’ experience may be really variable, and the only way to limit the creativity of the the LLM is to cripple its “humanity” by lowering the temperature of the generation, that is the amount of randomness.
\\nFrom a philosophical point of view, this is more than an engineering challenge. It reframes the entire relationship between humans and software — from operator-and-tool to intent-and-agent.
\\n\\nTo start thinking LLM-first, ask:
\\nLet’s take a simple and familiar use case. Employees use an internal expense reimbursement app to submit receipts, track approvals, and manage payouts. This app already exists in many companies in a conventional form, with forms, tables, dashboards, and rule-based workflows.
\\nHere’s how that app might evolve across the Feature → Agent → Platform spectrum:
\\nHere, the UI is structured around traditional input forms and status dashboards. Employees upload a receipt, fill in fields like amount, date, and category, and submit it. A manager reviews it in a queue.
\\nLLM integration here is minimal and assistive. Maybe there’s a “suggest category” button next to the form field or an “explain this rejection” button that summarizes the policy.
\\nThe LLM helps, but only around the edges. The UI stays largely static and rule-driven.
\\nThe app introduces a goal-driven, partially autonomous assistant in the Agent phase.
\\nInstead of filling out a form from scratch, the employee uploads a receipt or pastes an email with travel details, and the LLM extracts relevant fields, reasons through what’s missing, and prepares a submission draft. The agent may ask follow-up questions like “Was this for a client meeting?” to determine category or justification. On the approval side, the manager interface no longer just displays static requests but shows agent-generated summaries, suggested actions, or automated approval paths based on past behavior and thresholds.
\\nThe UI becomes more dynamic and conversational, with the LLM acting as an intermediary between human intent and system rules.
\\nIn the Platform phase, the concept of “submitting an expense” is no longer bound to form-based workflows. Employees can simply say, “I took a client to dinner yesterday,” and the system orchestrates the rest — retrieving the receipt from their email or card provider, matching the client to a CRM record, checking the per diem policy, and initiating the reimbursement.
\\nThe UI is built around intent capture, confirmation flows, and oversight. It’s no longer a dashboard for managing tickets, but a semantic layer for managing financial intent. Human interaction is centered around correction, validation, and training, not routine input.
\\nIn the figure below, I summarize how the UI evolves from the most traditional buttons and textboxes-based UI through the steps described above. The UI parts that are bound to the LLM are in green:
\\nLLMs challenge long-held assumptions in product design. They shift interfaces from rigid workflows to adaptive, language-driven systems.
\\nAs we move beyond assistive features toward agentic and platform-level integrations, the role of the front end — and the product itself — begins to blur and evolve. The best LLM-first products will initially feel strange, even uncomfortable, because they break familiar patterns. But over time, they’ll reveal a new kind of simplicity — one that feels not just powerful but inevitable.
Google's new AI model, Gemini 2.5 Pro, is designed for building rich web applications. Its capabilities have helped vault Google to the top of the AI leaderboard for many frontend developers.
\\nGemini 2.5 Pro is Google’s “thinking model,” and it promises strong math and code capabilities. The new update puts it in contention with GPT 4.0 in terms of usefulness.
\\nIn this post, we will cover Google’s latest breakthrough with the Gemini 2.5 model, focusing on its “thinking” capabilities and what they mean for the future of frontend artificial intelligence tools.
\\nGemini 2.5 Pro distinguishes itself through deep reasoning capabilities integrated into its architecture, which is a significant advancement over its predecessors. Unlike the earlier models, where step-by-step thinking might have been achieved through patient prompting, Gemini 2.5 Pro’s design inherently supports this cognitive process.
\\nThis native integration allows Gemini 2.5 Pro to effectively break down and handle more complex problems through its multi-step reasoning:
These steps can be observed in interfaces like Google AI Studio. The model appears to “think out loud,” which leads to solutions across challenging tasks such as complex coding, mathematical problems, and scientific reasoning.
\\nWhile Google doesn’t explicitly publish the way Gemini 2.5 Pro achieves its reasoning, I did a little research, and tried my best to wrap my head around this.
\\nHere’s a quick diagram to explain:
Gemini 2.5 Pro processes its information through a three-part system:
\\nNow that we understand a bit more about how it works, let’s explore why you should use Gemini 2.5 Pro.
\\nThe model has been able to show strong reasoning and coding capabilities across a wide range of tasks. It presently leads the WeDev Arena Leaderboard with a significant gap:
Gemini 2.5 Pro handles vast amounts of information effectively, thanks to its large context window, tested up to around 71,000 tokens. It officially supports up to one million input tokens:
This, in turn, allows it to process an entire codebase, long documents, or even video and audio inputs 👏.
\\nGemini Pro 2.5’s native multimodal capabilities mean it can understand and process text, images, audio, video, and PDFs, including parsing out graphs and charts, all within a single prompt.
\\nOther significant features include a grounding capability that connects responses to Google Search for more up-to-date and factual answers, complete with source links. While Gemini 2.5 Pro itself focuses on text output, it integrates within Google’s ecosystem, which includes models for image (Imagen 3) and video (Veo 2) generation.
\\nSo, how does Google win? Some of its advantages will be due to its access to large amounts of data, advancements in science and machine learning, and the use of powerful hardware, including custom chips.
\\nUnlike many competitors who might specialize in model development (like OpenAI or Anthropic), data collection (like Scale AI), or hardware (like Groq or Samba Nova), Google is the only company that integrates all three:
This integration, particularly between the science and hardware teams, provides a significant strategic advantage. Google’s AI researchers can build models optimized to run efficiently on Google’s own custom chips (Tensor Processing Units, or TPUs).
\\n\\nThis whole collaboration allows optimizations that may not be possible when targeting general-purpose hardware like NVIDIA GPUs. These GPUs have historically dominated AI training and inference due to their parallel processing capabilities. Google isn’t reliant on external chip manufacturers like Nvidia, allowing for more competitive pricing.
\\nGoogle utilizes its own specialized hardware (like TPUs) to make Gemini models run faster, but way cheaper than its competitors. We have seen their Gemini Flash model demonstrate this with impressive speed, at reportedly 25x lower token costs.
\\nThis hardware advantage, combined with Google’s large data resources and self-funded research, allows them to offer competitive AI primarily through cloud services and their improved AI Studio interface.
\\nGemini 2.5 Pro’s advanced reasoning and large context window (1M tokens) could significantly impact various fields. These capabilities can be accessed through multiple platforms (Google AI Studio, Vertex AI, Gemini app/web, or integrated Google products):
Google’s AI Studio provides a web-based platform for experimenting with Google’s AI models.
\\nThe interface above is divided into a navigation panel on the left for selecting tools like Chat, Stream, or Video Gen, and accessing history or starter apps.
\\nThe central area is the main workspace, currently showing a Chat Prompt interface where users can input text, receive AI-generated responses, and use example prompts. The top bar provides access to API keys, documentation, and account settings.
\\nOn the right, a Run settings panel allows users to configure the AI’s behavior. This includes selecting the specific AI model (e.g., “Gemini 2.5 Pro Preview”), adjusting parameters like Temperature to control creativity, and managing Tools, such as structured output, code execution, function calling, and grounding with Google Search. This comprehensive setup enables developers and users to explore AI models directly in their browser.
\\n\\nWith all these nice features, how do we utilize this in our codebase? Let’s check it out.
\\nThis can easily be done by using gitingest to accomplish everything if you wish. You can tell Gemini 2.5 Pro to extract a particular logic or rewrite the entire code base using a different framework. This will particularly come in handy for frontend developers as it bridges the gap of doing something repeatedly when it can be done in one shot.
\\nGemini offers real precision in making 3D games. These results are overwhelming. I did try one out using this prompt:
\\n“Create a dreamy low-poly flight game scene. Cameras should follow behind with dynamic lighting and gentle animations. Add controls to make it faster. This flight game should be controlled by me, and it should be able to skip bricks and buildings, in a single HTML file.”
\\nTo be honest, the game didn’t work out in the first prompt. But with a little effort, I was able to fix it. Check out the game here:
\\nSee the Pen
\\nGemini 2.5 Flight Game by Emmanuel Odioko (@Emmanuel-Odioko)
\\non CodePen.
I also wanted to test Gemini’s performance in creating simple web apps. I gave it a one-sentence prompt:
\\n“In one HTML file, recreate Facebook’s home page on desktop. Look up Facebook to see what it looks like recently.”
\\nHere is the result:
\\nSee the Pen
\\nFacebook Gemini 2.5 Examples by Emmanuel Odioko (@Emmanuel-Odioko)
\\non CodePen.
I did the same with X:
\\n“In one HTML file, recreate the X home page on desktop. Look up X to see what it recently looks like, put in real images everywhere an image is needed, and add a toggle functionality for themes.”
\\nIt had a more difficult time doing this, but we arrived here at last:
\\nSee the Pen
\\nX generated Gemini 2.5 by Emmanuel Odioko (@Emmanuel-Odioko)
\\non CodePen.
Dark theme looked like this:
And light theme:
Not bad for a free tool, right?
\\nI went ahead and tried LinkedIn. Here is the result:
\\nSee the Pen
\\nLinkedIn Generated By Gemini 2.5 by Emmanuel Odioko (@Emmanuel-Odioko)
\\non CodePen.
Something to note: To draw the very best from Gemini 2.5 Pro, be very distinct with your prompt. Explaining what you want in great detail will help you get to the end result quicker.
\\nGemini 2.5 Pro stands tall as of today as the best web development model out there. It’s going head-to-head with other leading companies like OpenAI, Microsoft, Anthropic, and others. Below are the comparison data according to artificialanalysis.ai :
\\nProvider | \\nModel | \\nOutput Speed (Tokens/s) | \\n
---|---|---|
Gemini 2.5 pro | \\n147 | \\n|
OpenAI | \\nGPT-4o | \\n142 | \\n
xAI | \\nGrok 3 | \\n95 | \\n
DeepSeek | \\nR1 | \\n23 | \\n
| Provider | Model | Math (GSM8K / MATH) | Coding |
| --- | --- | --- | --- |
| Google | Gemini 2.0 Pro | 67 | 55 |
| OpenAI | GPT-4o | 70 | 63 |
| Anthropic | Claude 3.5 Sonnet | 57 | 49 |
| xAI | Grok 3 | 67 | 55 |
| DeepSeek | R1 | 60 | 44 |
| Provider | Model | Input Price ($/M) | Output Price ($/M) |
| --- | --- | --- | --- |
| Google | Gemini 2.0 Flash | 0.35 | 0.35 |
| Google | Gemini 2.0 Pro | 1.50 | 1.50 |
| OpenAI | GPT-4o | 5.00 | 15.00 |
| Anthropic | Claude 3.5 Sonnet | 3.00 | 15.00 |
| xAI | Grok 3 | 2.00 | 2.00 |
| DeepSeek | R1 | 0.30 | 0.30 |
Benchmarks can be deceiving, and you should only trust them to a point. When it comes to agentic coding, Claude 3.7 is up there. But we now have Gemini 2.5 as a strong competitor, and yes, it does have an edge as of today.
\\nIts API’s are cheaper, and it has a much larger token context window. Claude will not be able to generate the 2D flight game above in one shot – not even in two, to be honest – because of its low token context.
\\nOne million tokens seems like enough, but the Google team has promised a two-million-token context window, which should be enough for many codebases. In this article, we were able to look at what makes Gemin 2.5 different, its use cases, and how to get the best when prompting. Lastly, we saw its ability to spin up different demo projects in seconds.
\\nHope you found this exploration helpful. Happy coding!
Creativity has reached new heights thanks to the rise of artificial intelligence (AI), which makes image generation possible with simple text prompts and descriptions capable of creating amazing works of art. Artists, designers, and developers alike have access to a wealth of AI tools to enhance traditional workflows.
\\nIn this article, we will build a custom AI image generator application that works offline. The web app will be built with React, and the core of the project will be powered by the Hugging Face Diffusers library, a toolkit for diffusion models. Hugging Face offers tools and models to develop and deploy machine learning solutions, focusing on natural language processing (NLP). It also has a large open source community of pre-trained models, libraries, and datasets.
\\nWe’ll pair these tools with Stable Diffusion XL, one of the most advanced and flexible text-to-image generation models currently available.
\\nA significant part of this article will focus on the practical implementation that is achieved by running these kinds of models. Two main approaches will be compared: performing inference locally inside an application environment versus using a managed Hugging Face Inference Endpoint. With this comparison, we’ll see the trade-offs in performance, scalability, complexity, and cost.
\\nModern AI image creation is led by a class of models known as diffusion models. Picture an image going through a process that turns it into static noise, one pixel at a time. Diffusion models do the opposite by starting with noisy randomness, building structure, and then removing the noise until a visually recognizable image appears based on the text prompt. This calculation can be intensive, but it allows anyone to create fantastically accurate and beautiful photos.
\\nDiffusion models can be complex and challenging to work with, but fortunately, the Hugging Face Diffusers library simplifies this process significantly. Diffusers offers pre-trained pipelines that can be used to create images using just a few lines of code, along with precise control over the diffusion process for complex use cases. It protects us from most of the complexity of the process, thus allowing us to spend more time creating our apps.
\\nIn this project, we will use stabilityai/stable-diffusion-xl-base-1.0
. Stable Diffusion XL (SDXL) is a huge leap for text-to-image generation, thanks to its capability of generating higher quality, more realistic, and more visually stunning pictures than models of the same class. It also has a superior capacity for handling prompts and can generate more sophisticated outputs.
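Here's a minimal local-inference sketch with Diffusers, assuming the diffusers and torch packages are installed and a CUDA GPU is available; on CPU, drop the float16 dtype and the .to("cuda") call and expect far longer generation times:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Downloads the model weights from the Hugging Face Hub on first use
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(prompt="a lighthouse at sunset, oil painting").images[0]
image.save("output.png")
```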
You can check out the entire library of text-to-image models in the Text-to-Image page under the filters section.
\\nOur image generator app will have a full-stack structure that uses local inference. This means that the app will run offline on our machine and won’t require internet access or calls to an external API.
\\nThe frontend will be a React web app that runs in the browser, and it will make calls to our Flask backend server, which will be responsible for running the AI inference and logic locally using the Hugging Face Diffusers library.
\\nAI inference is the process of using a trained AI model to make predictions or decisions based on new input data that it has not seen before. In its simplest form, this can mean training a model to learn a new pattern after analyzing some data.
\\nWhen the model can use what it has learned, this is referred to as “AI inference.” For example, a model can be trained to tell whether it’s looking at a picture of a cat or a dog. In more advanced use cases, it can be trained to perform text autocompletion, offer recommendations for a search engine, and even control self-driving cars.
\\nThis is what the design for our AI image generator app will look like:
\\nOur project will use the following technologies:
\\n<img>
tag to display the newly generated picture for the userFor better UX, the React frontend displays a loading indicator that remains as long as the backend is performing its inference, and error messages if anything goes wrong.
\\nYou can clone or download the application from this GitHub repository. The README file contains instructions on how to set up the Flask backend and React frontend, which are straightforward and shouldn’t take very long.
\\nWhen you complete the setup, you need to run the backend and frontend servers in different tabs or windows on the command line with these commands:
```bash
# Run the Flask backend
python app.py

# Run the React frontend
npm run dev
```
When you call the Diffusers library for the first time, it will automatically download the model files from the Hugging Face Hub. These models will then be stored in a local cache directory on your machine.
On macOS/Linux, the default location for this cache is in your home directory under ~/.cache/huggingface/hub. Inside that directory, you will find more subdirectories that are related to the models you have downloaded. For this model, the path might look something like this: ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0/.
The library will manage this cache automatically, so you don’t need to interact with it directly. If you have downloaded other models using Hugging Face libraries, they will also be stored in this cache.
\\nWhen you generate an image using the application below, you can see what it looks like in the terminal. For reference, an M1 MacBook Pro takes between two and 12 minutes to generate an image. A high-spec computer will likely complete the process in seconds, especially if you have an NVIDIA GPU with CUDA:
\\nOne of the most essential factors when deploying an AI model for inference is where the computation takes place. In our React app example, we had two choices:
\\nLet’s see how both models compare!
\\nWhen a diffusion model is run locally, the computations are performed on the user’s machine or a server, which is directly under your control, and not a third party like a shared hosting provider.
\\nThe pros to this approach include:
\\nThe cons to this process include:
\\nUnlike local inference, Hugging Face Inference Endpoints are managed online on Hugging Face infrastructure. Models can be accessed through an easy-to-use API call.
\\nThe pros to using this approach include:
\\nThe cons to this approach include:
\\nWhen it comes to setup, local inference is highly technical in terms of setting up the hardware and software stack. Hugging Face Endpoints have a much less technically complicated setup and a focus on model selection and setup through their platform.
\\n\\nAs for cost, local inference is pricey to set up in terms of hardware and upfront costs, but it potentially has lower ongoing costs if usage is extremely high and the hardware is performant. Inference Endpoints are cheap upfront but have costs that accumulate depending on use, which can be more stable and manageable for variable usage.
\\nPerformance is greatly dependent on hardware, with inference endpoints having less variable performance (following cold starts) on optimized hardware, the trade-off being that latency arises from network communication.
\\n\\nChoosing between Hugging Face Inference Endpoints and local inference mostly depends on the type of requirements and constraints of your project:
\\nUse local inference if you:
\\nChoose Hugging Face Inference Endpoints if you:
\\nWhen an image is generated on the backend, it needs to be optimally transmitted to the frontend React app. The biggest concern is transferring binary image data over HTTP.
\\nTwo techniques can be used to solve this:
\\nOne of the best aspects of using the Base64 encoding method is that it integrates the image directly into the API’s JSON response as it converts the binary data into a string of text. This can simplify the transfer because the image data is bundled along with other response data, so it can be embedded directly into an image tag as a data URL.
\\nHowever, a big drawback to this approach is the 33% increase in data size due to the encoding process. Very small or multiple images can significantly increase the payload, making network transfer time slower and even impacting frontend speed as it is forced to process much more data.
\\nOn the other hand, the temporary URL method has the backend to save the generated image file and return a short-lived URL pointing to its location. The frontend can then use this URL to retrieve the image independently.
\\nThis approach keeps the initial API response payload small and uses the browser’s enhanced image loading efficiency. Even though it adds backend overhead for handling storage and generating URLS, it more or less offers enhanced performance and scalability with larger images or for generating more images, as the main data channel is not filled with large image content.
\\nChoosing between these methods mainly comes down to anticipating image sizes and balancing backend complexity with frontend responsiveness.
\\nOne of the largest issues with building an AI image generation app, especially when performing inference on a local machine or a self-hosted server, is handling the high computational demands.
\\nFor example, Stable Diffusion XL and similar models need fast, high-capacity GPUs, which can be very costly. They might also require special setup and technical knowledge, which the average non-technical person might not have.
Regardless of the inference method, good error handling is crucial. This means validating user prompts to limit inappropriate or unsafe requests, and handling potential failures gracefully.
Beyond the basic implementation, performance optimization is a significant requirement for an uninterrupted experience, because image generation can be time-consuming. Strategies such as loading screens, queueing requests on the backend, model optimization, and caching can all help with latency problems.
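For instance, a simple prompt-level cache (a sketch under the assumption that identical prompts may reuse a previous result) can skip inference entirely on repeats:

```ts
// generateImage is a hypothetical call to whichever inference method you use
declare function generateImage(prompt: string): Promise<string>;

const cache = new Map<string, string>();

async function generateCached(prompt: string): Promise<string> {
  const key = prompt.trim().toLowerCase();
  const hit = cache.get(key);
  if (hit) return hit; // serve the cached image and skip inference

  const result = await generateImage(prompt);
  cache.set(key, result);
  return result;
}
```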
Security should also be taken into account: store sensitive API keys securely when using managed endpoints, sanitize user input to avoid vulnerabilities, and consider content moderation for generated output. These measures are important for creating a secure and responsible application.
\\nThe world of generative AI is evolving fast, and the best way to understand what it can and can’t do is by building projects to test the wide variety of tools available.
\\nUse this guide as a reference point for building a React-based AI image generation app. The project we built is good enough to be a basic minimum viable product (MVP). You can experiment with more features using the other text-to-image models on the Hugging Face Hub. There’s no limit to what you can create with all the tools we have at our disposal today!
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\n cursor
property?\\n cursor
property\\n cursor
property\\n cursor
property\\n The CSS cursor
property carries a lot of weight for user experience. It’s like a quiet guide that instantly tells users what can be done on your page — where they can input text, which elements are clickable, draggable, or resizable.
Getting these visual clues right doesn’t just make your site look more polished; it makes it feel more interactive and intuitive. When users don’t have to guess how to interact with your interface, they spend less time confused and more time focused on what really matters.
\\nIn this tutorial, we’ll explore the syntax of the cursor
property, its supported values, and how to use them effectively in real-world examples.
What is the CSS cursor property?
The CSS cursor
property allows developers to control the mouse pointer's appearance when users hover over specific elements. It's a subtle but powerful tool that helps communicate functionality before users even click, type, or drag anything.
Here’s what the basic CSS cursor
property syntax looks like:
```css
cursor: pointer;
```
This basic form uses a predefined keyword value (pointer) that tells the browser which built-in cursor
style to use.
For a full list of cursor
values (including both popular and lesser-known ones), check out this CodePen:
See the Pen Cursor Values Preview by Emmanuel Odioko (@Emmanuel-Odioko) on CodePen.
Different devices handle cursors differently, so it’s important to plan for various input methods:
On touch devices like phones and tablets, the cursor is not visible. This means cursor: pointer won't help mobile users identify clickable elements. So it's important to pair cursor styles with other visual indicators, like clear button styling, that make it obvious an element is tappable:
```css
.button {
  /* Clear visual styling for mobile */
  background-color: #4f46e5;
  color: white;
  padding: 12px 24px;
  border: none;
  border-radius: 8px;
  font-size: 16px;
  font-weight: 600;

  /* Mobile-friendly touch targets */
  min-height: 44px;
  min-width: 44px;

  /* Visual feedback without cursor */
  transition: background-color 0.2s;
}

.button:active {
  background-color: #3730a3; /* Darker on tap */
  transform: scale(0.98); /* Slight press effect */
}

.button:disabled {
  background-color: #9ca3af;
  color: #6b7280;
}

/* Only add cursor on devices that support hover */
@media (hover: hover) {
  .button {
    cursor: pointer;
  }

  .button:hover {
    background-color: #4338ca;
  }
}
```
Hybrid devices like laptops with touchscreens, Surface tablets with keyboards, and iPads with Apple Pencils switch between input modes constantly. CSS media queries can help detect these scenarios:
```css
/* Only apply hover cursors when hover is available */
@media (hover: hover) {
  .button {
    cursor: pointer;
  }
}

/* Adjust for coarse pointers (touch) */
@media (pointer: coarse) {
  .resize-handle {
    min-width: 44px; /* Larger touch targets */
  }
}
```
Screen readers and keyboard navigation bypass the cursor entirely. Always make sure your interface works with and without cursor feedback; you can verify this by testing with keyboard-only navigation.
\\nThis multi-modal approach ensures your cursors enhance the experience without becoming a dependency. Going forward, we will see how to use the cursor
property in a real project.
Use cases for the CSS cursor property
We encounter cursor changes every day while browsing, often without even noticing. For instance, on macOS, when you hover near the edge of a window, the cursor changes to a resize arrow, hinting that the window can be resized. No clicking needed; the intent is clear.
\\nLet’s look at some real-world use cases where cursor styles shape user interactions:
\\nIn the example below, multiple cursor styles work together:
\\nSee the Pen
\\nDraggable & Resizable CSS-Cursor by Emmanuel Odioko (@Emmanuel-Odioko)
\\non CodePen.
Take a look:
- cursor: move on the header indicates it can be dragged
- cursor: e-resize, s-resize, and se-resize are used on the right edge, bottom edge, and corner, respectively, to suggest horizontal, vertical, and diagonal resizing

While CSS is used to handle the cursor feedback, JavaScript is employed to manage the actual drag and resize behavior. You can implement this with vanilla JavaScript using mouse events (mousedown
, mousemove
, mouseup
) or use libraries like Interact.js or React DnD for more robust functionality:
```js
// Basic vanilla JS approach
element.addEventListener('mousedown', startDrag);
document.addEventListener('mousemove', drag);
document.addEventListener('mouseup', stopDrag);

// Or with Interact.js (simpler API)
interact('.draggable').draggable({
  listeners: { move: dragMoveListener }
});
```
The cursor styles tell users what they can do, while JavaScript actually makes the dragging work. When users see cursor: move
, they know they can drag, then JavaScript handles the movement when they click and drag.
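For a sense of what that JavaScript looks like, here is a stripped-down drag sketch; the .draggable selector and position: absolute styling are assumptions for illustration:

```ts
// Minimal drag sketch: assumes a .draggable element with position: absolute
const el = document.querySelector<HTMLElement>(".draggable")!;
let offsetX = 0;
let offsetY = 0;
let dragging = false;

el.addEventListener("mousedown", (e) => {
  dragging = true;
  offsetX = e.clientX - el.offsetLeft;
  offsetY = e.clientY - el.offsetTop;
});

document.addEventListener("mousemove", (e) => {
  if (!dragging) return; // only move while the mouse button is held down
  el.style.left = `${e.clientX - offsetX}px`;
  el.style.top = `${e.clientY - offsetY}px`;
});

document.addEventListener("mouseup", () => {
  dragging = false;
});
```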
Here’s a CodePen for the pointer cursor:
See the Pen Interactive Buttons with Pointer Cursor by Emmanuel Odioko (@Emmanuel-Odioko) on CodePen.
This example uses:
- cursor: pointer on buttons to indicate clickable elements (this hand-shaped cursor is generally known to indicate clickable interactivity)
- cursor: not-allowed on a disabled button, represented by a circle with a slash, clearly indicating no interaction is possible

In many frameworks like React (especially with React 19's async behavior), the useFormStatus
hook is commonly used to manage loading states during form submissions, where cursor: not-allowed
is applied to disable user interaction while the form is being processed.
This feedback is typically paired with other styles like reduced opacity and color changes to reinforce the disabled state.
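As a rough sketch of that pattern (not the exact code from the demo), a submit button can read the pending state from useFormStatus and pair the disabled attribute with a not-allowed cursor and reduced opacity:

```tsx
import { useFormStatus } from "react-dom";

// Must be rendered inside a <form> for useFormStatus to report its status
function SubmitButton() {
  const { pending } = useFormStatus();

  return (
    <button
      type="submit"
      disabled={pending}
      style={{
        cursor: pending ? "not-allowed" : "pointer",
        opacity: pending ? 0.6 : 1,
      }}
    >
      {pending ? "Submitting..." : "Submit"}
    </button>
  );
}
```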
\\nHere is an example of a loading state CodePen. I’d suggest you interact with it to get the best of this section:
See the Pen Loading State with Wait Cursor by Emmanuel Odioko (@Emmanuel-Odioko) on CodePen.
\\nIn this example, the cursor: wait style is applied using an .is-loading class. It shows a spinning cursor to let users know something is in progress.
This visual cue is supported by other elements like a spinner animation and text changes, common in real project loading states.
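One minimal way to wire this up (a sketch; the .submit selector and /api/submit endpoint are placeholders) is to toggle the class around an async action:

```ts
const button = document.querySelector<HTMLButtonElement>(".submit")!;

button.addEventListener("click", async () => {
  document.body.classList.add("is-loading"); // CSS applies cursor: wait
  try {
    await fetch("/api/submit", { method: "POST" }); // placeholder request
  } finally {
    document.body.classList.remove("is-loading"); // restore the default cursor
  }
});
```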
\\n\\nThis is a CSS cursor text example CodePen demo:
See the Pen Text Editing with Text Cursor by Emmanuel Odioko (@Emmanuel-Odioko) on CodePen.
Here, cursor: text; is used on both input fields and a custom div. This I-beam cursor means that text in that area is editable.
\\nAlthough most browsers apply this cursor by default to input fields, it’s important to explicitly set it for custom editable areas to maintain usability.
\\nLet’s look at this one here:
See the Pen Scrollable Gallery with Grab/Grabbing Cursors by Emmanuel Odioko (@Emmanuel-Odioko) on CodePen.
The demo uses two cursor styles:
- cursor: grab when hovering over the draggable area
- cursor: grabbing when the user actively drags

When the user clicks down, JavaScript switches the cursor from grab
to grabbing
, changing from an open hand to a closed fist that indicates active dragging is in progress. This pair of cursor styles forms a natural metaphor for physical interaction, like grabbing and moving a physical object.
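A stripped-down version of that switch, assuming a .gallery element as the draggable area, might look like this:

```ts
const gallery = document.querySelector<HTMLElement>(".gallery")!;
gallery.style.cursor = "grab"; // open hand at rest

gallery.addEventListener("mousedown", () => {
  gallery.style.cursor = "grabbing"; // closed fist while dragging
});

document.addEventListener("mouseup", () => {
  gallery.style.cursor = "grab"; // back to the open hand on release
});
```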
Can you create custom CSS cursors?
Yes! You can create custom CSS cursors. Here's the basic syntax to use custom images as cursors:
```css
cursor: url(path-to-cursor-image) x y, fallback-keyword;
```
This would look something like this in a project:
```css
cursor: url(custom-cursor.png) 10 15, auto;
```
Here’s what each component means:
\\nurl(path-to-cursor-image)
— Path to your custom cursor image filex y
— Optional coordinates that define the exact point that registers clicksfallback-keyword
— A mandatory fallback cursor that displays if the image fails to loadYou can also specify multiple cursor images separated by commas:
```css
cursor: url(first-image.png) 5 5,
       url(second-image.svg),
       url(third-image.cur) 0 0,
       fallback-keyword;
```
In the code above, the browser tries each cursor in order and falls back to the keyword if none load.
For developers looking to create custom cursors without starting from scratch, there are several free cursor galleries and online converters available to help.

Most image editors like GIMP, Photoshop, or even online tools can export .cur
files. Just remember to keep cursors small (32×32 or 16×16 pixels) for best performance and cross-browser compatibility.
Limitations of the CSS cursor property
Despite its usefulness, the CSS cursor
property has a few limitations:
- Using cursor: pointer; on non-clickable elements (or vice versa) can break a user's expectations; the effect may seem small, but it hurts the experience, especially for fellow developers

I recommend you always pair cursor styles with other indicators:
- aria-disabled="true" alongside cursor: not-allowed for disabled elements
- role attributes where appropriate (e.g., role="button" for clickable divs)
- aria-label or aria-describedby to provide context for complex interactions

For users navigating with keyboards, focus indicators serve the same role that cursor changes do for mouse users. They signal which element is currently active and what can be interacted with:
```css
.interactive-element {
  cursor: pointer;
}

.interactive-element:focus-visible {
  outline: 2px solid #4f46e5;
  outline-offset: 2px;
  box-shadow: 0 0 0 4px rgba(79, 70, 229, 0.1);
}

.disabled-element {
  cursor: not-allowed;
  opacity: 0.6;
}

.disabled-element:focus-visible {
  outline: 2px solid #ef4444;
  outline-offset: 2px;
}
```
The :focus-visible
pseudo-class ensures focus rings only appear for keyboard users, not mouse clicks. This creates a smooth interaction system where cursor affordances guide mouse users while focus styles guide keyboard users.
These are all minor limitations, but the cursor
property remains very useful when used thoughtfully. The trick here is to use it as a UI enhancement rather than the main or only measure of functionality.
Browser support for the CSS cursor property
The CSS cursor
property has great overall support in modern browsers. Basic values like pointer
, default
, text
, and wait
have almost global support across all modern browsers. In fact, support goes back to Internet Explorer 4, making this one of the most reliable CSS properties available.
For full compatibility details, check the MDN Docs.
\\nWe’ve looked into how the CSS cursor
property enhances interaction. The cursor
property was one of the first CSS features that excited me when I started learning. It was one of the “aha!” moments that made CSS feel interesting.
I will leave you with this: Using cursor: none
to hide the pointer might work for immersive games, but it’s disorienting and confusing in regular UI contexts.
Cursor spoofing, where developers mimic system cursors or create deceptive cursors for phishing attempts, breaks user trust and can be misused. Perhaps most frustrating is the misleading cursor problem: using cursor: pointer
on non-clickable elements or cursor: grab
on content that can’t actually be dragged. These false signals could frustrate a user.
Whether you’re new to CSS or a seasoned dev, remember that cursor styles should enhance, not confuse. When users can trust your visual cues, they spend less time guessing and more time engaging with what really matters in your application.
\\nIf you’re curious about custom cursors, check out this detailed guide on creating custom CSS cursors. Keep coding — and keep those cursors honest!
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nOver the past few years, AI has taken the world by storm. From chatbots that can plan entire trips, write code, rephrase sentences, and so much more, the possibilities seem endless thanks to powerful Large Language Models (LLMs).
\\nBut LLMs are trained on existing data sources and information, so the models are often limited in their knowledge. Retrieval-Augmented Generation (RAG) aims to solve this problem once and for all.
\\nRAG is a technique that augments an LLM’s response generation by retrieving information from a source, which can be anything from a database to a search engine. When the latest information is fed into the model, it uses its training data and this newly supplied information to generate a response.
\\nThere are currently two main methods to integrate RAG into your application: the Model Context Protocol (MCP) and function calling. In this guide, we will take a deep look at these processes, paying special attention to the less popular function calling, to build an AI scheduling assistant that allows you to review your calendar and book meetings in real time.
\\nLet’s get started!
If you've been on X or LinkedIn lately, you've probably seen the term "MCP" being thrown around. MCP is a protocol defined by Anthropic that describes a standard language, or framework, through which an LLM can be connected to the external world.
\\nMCP goes beyond just augmenting the knowledge base of the model with the latest information. In addition to fetching information, MCP also allows your model to perform certain actions.
\\nThe way MCP is integrated into a model is by creating an MCP server, which implements the MCP protocol and can talk to other clients that also implement the MCP protocol.
\\nThe server acts as a bridge between the client and the external world via the Model Context Protocol. The client can get information about tools, resources, and prompts from the server (during the initial discovery phase of the protocol), based on which further communication can take place. The server can also perform actions based on the information it receives from the client.
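For a rough sense of the discovery phase, here is a hedged sketch of a tools/list exchange over MCP's JSON-RPC transport: the first object is the client's request and the second is the server's response. The get_repo_issues tool is a made-up example, and the exact message shape should be checked against the MCP specification:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_repo_issues",
        "description": "List open issues for a repository",
        "inputSchema": {
          "type": "object",
          "properties": { "repo": { "type": "string" } },
          "required": ["repo"]
        }
      }
    ]
  }
}
```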
\\nThis is how an interaction between an MCP server and client can be visualized:
\\nFunction calling is a less popular, yet equally capable, method of integrating RAG into your application. Function calling is supported by all the major LLMs provided by OpenAI, Google, and Anthropic, and it is built into the way the models are queried via APIs.
\\nWith function calling, whenever we call the /completions
or /responses
APIs, which are the primary ways to interact with an LLM, we supply them with a list of functions or tools. These functions/tools are a list of “capabilities” that are being provided to the model with proper descriptions of their purpose, input parameters, and output.
The model uses this information to decide whether or not a function needs to be invoked. It conveys this decision in the response of the API call by returning the function name and the parameters that need to be passed to the function. It is the client’s responsibility to implement the function and return the result to the model.
The model can then use this information to generate a response. This process is similar to implementing the MCP protocol, as we also specify tools in that case. But it differs in that we, as the developers, have fine-grained control over when to invoke these tools, which gives us a higher level of agency than MCP offers.
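To make the loop concrete, here is a hedged sketch using the OpenAI JavaScript SDK's Responses API; input holds the conversation so far, tools holds the function definitions, and runTool is a hypothetical dispatcher you implement yourself:

```ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function answerWithTools(
  input: any[],
  tools: any[],
  runTool: (name: string, args: any) => Promise<any>
) {
  const first = await openai.responses.create({ model: "gpt-4.1", input, tools });

  const call = first.output.find((item: any) => item.type === "function_call");
  if (!call) return first.output_text; // model answered without a tool

  // Run the suggested function ourselves, then hand the result back
  const result = await runTool(call.name, JSON.parse(call.arguments));

  const second = await openai.responses.create({
    model: "gpt-4.1",
    tools,
    input: [
      ...input,
      call,
      { type: "function_call_output", call_id: call.call_id, output: JSON.stringify(result) },
    ],
  });
  return second.output_text;
}
```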
\\n\\nThis is how an interaction between an AI app and the model via function calling can be visualized:
\\nThink of a service like GitHub. If GitHub wants to expose all of its data to an AI model, it can create an MCP server that structurally exposes this data via MCP. Thereafter, any client that implements the MCP protocol can access this data. It’s a great way for big services like GitHub to expose their data to the outside world.
\\nMCP works great in certain situations. In the previous example about GitHub, if a code editor needs to have information about the latest documentation, the code editor can be the MCP client, and the entity whose documentation is being accessed can implement an MCP server. Once this is done, there is a direct “bridge” between the documentation and the code editor. This can be leveraged throughout the whole development process with just the initial effort.
\\nWhile MCP is all the hype these days, and it works well in scenarios like the one we mentioned above, it might not be the best solution for your use case. In the next section, we’ll discuss some points worth considering before you choose to implement MCP for your application.
\\nAs seen in the diagram above, the MCP approach takes the control out of the hands of the client. It is a more authoritative approach where the model is given full access to the external world, and there are no checks and balances to ensure that the model doesn’t do anything unexpected. This can be a problem if the model is given access to sensitive information or is allowed to perform actions that can have a negative impact on the organization (like accidentally deleting data from the database).
\\nOn the other hand, function calling is a more controlled approach where the client has full control over what actions can be performed. This is because the model can only suggest invoking certain functions with particular arguments. The client then decides whether to invoke the function or not, adding a second layer of protection. This is a more transparent approach because the client has full visibility into what actions are being performed by the model, and the client has the end responsibility of supplying the required data to the model.
\\nThe MCP server is a separate entity that needs to be maintained and monitored. It runs as a separate process from your program and will consume additional resources. If your main program runs as a Docker container, you will need to run the MCP server as a separate container. This adds additional responsibility for maintaining the server. This can be a problem if your program is running in a resource-constrained environment or if you are running multiple instances of your program.
\\nOnce you set up an MCP server, it is not necessarily true that the server can communicate with all other programs. The server can only communicate with clients that also implement the MCP protocol. Thus, before implementing your own MCP server, you need to verify that the consumer or the client first supports the MCP protocol.
\\nWith that in mind, let’s look at an example of function calling with the OpenAI API. We will implement a scheduling assistant that can help others book meetings on your calendar by checking your availability in real time.
\\nThe assistant will interact with the user to get the date and time of the meeting in natural language. Then, it will check your calendar to see if you are available at the given time. If you are, it will book the meeting and send a confirmation email to the user. If you are not available, it will ask for an alternative time and repeat the process.
\\nWe’ll use React Router to set up the boilerplate frontend repository, Tailwind CSS to style the application, and the OpenAI API for the LLM.
\\nFor the functionality of checking your calendar, we will use a mock API that stores meetings in a JSON structure. Later, this can be replaced with any calendar API, like Google Calendar or Outlook Calendar. The code for this project is available in this GitHub repository.
\\nTo set up the boilerplate, run the following command:
```bash
npx create-react-router@latest ai-assistant
```
Then, navigate to the welcome.tsx
file and remove the boilerplate code. We will create a new file called chat.jsx
in the same folder.
To simplify our task of implementing the chat UI, we will use a pre-built chat component from the Shadcn library. The chat UI component can be found on GitHub. For that to work, we will need to first initialize the Shadcn library. We can do that by running the following command:
```bash
npx shadcn@latest init
```
This will create a new folder, lib
, and a file called utils.ts
inside that folder. This will contain the utils
required for the Shadcn library.
Now, we’ll install the required core components from the Shadcn library that are used in the chat component. We’ll do that by running the following command:
```bash
npx shadcn@latest add avatar button card input
```
This installs the required components under the components/ui
folder in our codebase. This is the specialty of the Shadcn library: components are installed as source code in our own repository rather than as a package, which allows us to customize them as needed.
After tweaking the imports in the chat.jsx
file, we will have the following code:
```jsx
import { cn } from '../lib/utils';
import {
  Avatar,
  AvatarFallback,
} from '../components/ui/avatar';
import { Button } from '../components/ui/button';
import {
  Card,
  CardContent,
  CardFooter,
  CardHeader,
} from '../components/ui/card';
import { Input } from '../components/ui/input';
```
And this is how the UI looks:
\\nWe will now integrate the OpenAI API into our application. We first get an API key by registering as a developer on the OpenAI platform. Next, we will install the OpenAI API client by running the following command:
```bash
npm install openai
```
We create an .env.local
file in the root of our project and add the following line:
```bash
VITE_OPENAI_API_KEY=your_api_key
```
Replace your_api_key
with the API key you got from the OpenAI platform. Then, we import the OpenAI client into our chat.jsx
file:
```jsx
import { OpenAI } from 'openai';
```
Initialize the client:
```jsx
const API_KEY = import.meta.env.VITE_OPENAI_API_KEY;
const openai = new OpenAI({
  apiKey: API_KEY,
  // Required for the SDK to run in the browser; fine for a demo, but in
  // production you should proxy requests through a backend instead
  dangerouslyAllowBrowser: true,
});
```
We modify the handleSubmit
function to call the OpenAI API and get a response. We will use the GPT-4.1 model for this. The function will look like this:
```jsx
const updatedMessages = [
  ...messages,
  {
    role: 'user',
    content: input,
  },
];

// Call the Responses API with the full conversation so far
const response = await openai.responses.create({
  model: "gpt-4.1",
  input: updatedMessages,
});

const assistantMessage = response.output_text;

setMessages((prev) => [
  ...prev,
  {
    role: 'assistant',
    content: assistantMessage,
  },
]);
```
Notice that we append the latest user message to the list of messages and then call the API with the updated list. We also take the output message (the assistant’s response) and append it to the list of messages. This way, the UI gets updated with the latest message:
\\nYou can see that the assistant is replying, asking for a time and date for the meeting. This is because of the system prompt that we have set, which explains the assistant’s role to the model. We will set the system prompt as follows:
```jsx
{
  role: 'system',
  content: "You are a scheduling assistant for Mr. KK. You can help the user in scheduling a 30 minute meeting by first accepting the date for which the meeting is expected. Then, using the tools, try to schedule the meeting during that time and provide confirmation after calling the proper tool.",
},
```
We will now define the tools or functions that the assistant can utilize to fulfill the user’s request. These functions will act as the model’s eyes and ears to the outside world. The model will use these functions to check the calendar for availability and book the meeting.
\\nBut before that, we need a function to work with dates and times.
\\nparse_date
functionWe run into our first problem when the user provides a date and time to the model in natural language. Consider the scenario where the user inputs Tomorrow
for the date.
If we do not have a function to parse the date, the model will take Tomorrow
to be the next day from the knowledge cut-off date. This is not what we want. We want the model to understand that Tomorrow
means the next day from the current date. To implement this, we will create a tool definition in the expected JSON format:
```json
{
  "type": "function",
  "name": "parse_date",
  "description": "Get the date from natural language sentence",
  "parameters": {
    "type": "object",
    "properties": {
      "date": {
        "type": "string",
        "description": "A natural language date like yesterday, tomorrow, last friday, 24th April"
      }
    },
    "required": ["date"],
    "additionalProperties": false
  },
  "strict": true
}
```
This is the format specified by OpenAI for defining functions that the model can use. Here, we give the function a name, specify its purpose in detail in the description field, and also provide detailed information about the input parameters. We then pass this function inside a tools
array to the API call as follows:
```jsx
const response = await openai.responses.create({
  model: "gpt-4.1",
  input: updatedMessages,
  tools: tools,
});
```
With that in place, let’s test how this works. We’ll send a message to the assistant asking for a meeting “two weeks from now”:
Because the assistant is unable to resolve the date from that natural language context, we see in the response that its output is of type function_call:
```json
{
  "id": "fc_6815f785b7188191a7c15279f9e9084102e882e0265f41fb",
  "type": "function_call",
  "status": "completed",
  "arguments": "{\"date\":\"2 weeks from now\"}",
  "call_id": "call_awXJ39Zj0pryh9nDcN9ao6X4",
  "name": "parse_date"
}
```
This means that the model is passing the context back to the client and asking it to call the parse_date
function with the arguments {\\"date\\":\\"2 weeks from now\\"}
. So we now need to implement the parse_date
function. We will use the chrono-node library for this purpose. Let’s first install the library by running the following command:
```bash
npm install chrono-node
```
Then, we will import it into our chat.jsx
file:
```jsx
// chrono-node ships named exports, so import the whole namespace
import * as chrono from 'chrono-node';
```
We will then implement the parse_date
function as follows:
```jsx
// `format` is assumed to come from date-fns: import { format } from 'date-fns';
const date_args = JSON.parse(firstResponse.arguments);
const parsedDate = chrono.parseDate(date_args.date);
const pDate = new Date(parsedDate);
const formatted = format(pDate, 'EEEE, do MMMM');

setMessages((prev) => [
  ...prev,
  {
    role: 'assistant',
    content: `I see that you want to schedule a meeting for ${formatted}. Is that correct?`,
  },
]);
```
Here, we are taking the arguments from the model’s response and parsing them using the chrono-node library. We then format the date to a more readable format and add it back to the messages array. That way, when the messages are sent back to the model after the user confirms the date, the model will have the correct date to work with.
\\nAfter the function is invoked, this message gets added to the chat:
```json
{
  "role": "assistant",
  "content": "I see that you want to schedule a meeting for Saturday, 17th May. Is that correct?"
}
```
The user can then confirm the date by replying with yes
or no
. If the user replies yes
, we can move on to the next step of scheduling the meeting. If the user replies no
, we can ask for an alternative date and repeat the process.
schedule_meeting
functionOnce the user confirms the date and we have a programmatic date to work with, we need to invoke a scheduler function that will check the calendar for availability and book the meeting. We will define the function as follows:
```json
{
  "type": "function",
  "name": "schedule_meeting",
  "description": "Schedule a meeting at the specified date and time",
  "parameters": {
    "type": "object",
    "properties": {
      "date": {
        "type": "string",
        "description": "A date to schedule the meetings for e.g 25 april, tomorrow, next friday etc."
      },
      "time": {
        "type": "string",
        "description": "A time to schedule the meetings for e.g 9am 2pm"
      },
      "title": {
        "type": "string",
        "description": "A title for the meeting like 'meeting with KK', 'meeting with client', 'meeting with team' etc."
      }
    },
    "required": ["date", "time", "title"],
    "additionalProperties": false
  },
  "strict": true
}
```
Notice that this function definition takes in three parameters: date
, time
, and title
. The model will invoke this function after getting the date, time, and title from the user through its natural language processing capabilities. If the model has the necessary data, it will send the output as a function call. This is how that invocation looks:
```json
{
  "id": "fc_68161f5c5d6481918d27bbc4bbc6b0d00ea30a77b5168b68",
  "type": "function_call",
  "status": "completed",
  "arguments": "{\"date\":\"tomorrow\",\"time\":\"10am\",\"title\":\"Meeting with Tim\"}",
  "call_id": "call_nJLPsGfrY8tQs5qlgXRxqfVO",
  "name": "schedule_meeting"
}
```
As we are successfully able to invoke the function, we will implement it now. We can code the schedule_meeting
function as follows:
```jsx
const meeting_args = JSON.parse(firstResponse.arguments);
const date = meeting_args.date;
const time = meeting_args.time;
const title = meeting_args.title;
const parsedDateTime = chrono.parseDate(`${date} ${time}`);
const status = await scheduleMeetings(parsedDateTime, title);
let message = "";

if (status === 'success') {
  message = "Scheduled successfully!";
} else {
  message = "Sorry, I was unable to schedule the meeting. Can you try another date and time?";
}

const newMessages = [
  ...updatedMessages,
  {
    role: 'assistant',
    content: message,
  },
];
setMessages(newMessages);
```
Notice how we are parsing the date and time from the model’s response and then creating a valid date object from it using the chrono-node library. We then call the scheduleMeetings
function, which will mock the process of checking the calendar and booking the meeting.
Let’s implement the scheduleMeetings
function. We will use a hardcoded JSON object to mock the current meetings in the calendar. The function will look like this:
```jsx
export function scheduleMeetings(date, title) {
  const d = new Date(date);
  const overlap = doesMeetingOverlap(d, 30);

  if (overlap) {
    return 'error';
  } else {
    const startTime = new Date(d.getTime());
    const endTime = new Date(startTime.getTime() + 30 * 60000);
    existingMeetings.push({ title: title, startTime, endTime });
    return 'success';
  }
}
```
Here, we are checking if the meeting overlaps with any of the existing meetings in the calendar. If it does, we return an error. If it does not, we add the meeting to the list of existing meetings and return success.
\\nOnce this success or error message is returned to the user, the assistant will then send a confirmation message to the user based on the logic in the handler. And that is the whole flow. With that in place, this is what a sample conversation looks like:
\\nNotice how the assistant can pull out the time as 10 am and the title as “Meeting with Tim” based on the message, “Make it 10 am and call it ‘Meeting with Tim\'”. That is the power of LLMs. They can understand the context and extract the necessary information from the user’s message.
\\nNow, let’s try to schedule a meeting for a time when the host is already blocked as per the dummy calendar we’ve set up. As we can see, there is already a meeting scheduled from 11 am to 12 pm. Let’s try to schedule another meeting at 11 am:
\\nThis request fails the first time because our scheduleMeetings
function returns an error. The model adds a relevant chat message based on the error response. The user provides an alternate time. Notice how the model does not ask for the meeting date and title again. It just uses the relevant data from the previous function and calls the scheduleMeetings
function again. This time, the function returns a successful message, and the meeting is scheduled.
And there we go! We have our fully functional scheduling assistant.
\\nIn this tutorial, we implemented a scheduling assistant that can help you book meetings in your calendar by checking your availability in real time. The assistant can understand natural language and extract the necessary information from users’ messages. It is also able to check the calendar for availability and book meetings, functionalities made possible through function calling. The functions will run in the context of our application and don’t need a separate server.
\\nThe next time you find yourself reaching for the MCP server, consider using function calling instead. It might just be a better and easier solution for your use case.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nEditor’s note: This tutorial was updated in May 2025 to align with current CLI conventions, clarify setup steps, and provide a quickstart guide.
Quickstart:

```bash
npm create vite@latest my-react-app -- --template react-ts
cd my-react-app
npm install
npm run dev
```
React, combined with TypeScript, offers a powerful way to develop scalable and maintainable web applications. TypeScript brings static typing to the world of JavaScript, making it easier to write error-free code. Meanwhile, Vite is a fast and lightweight build tool for modern web development, providing a rapid development experience focused on speed and simplicity.
\\nIn this article, we’ll delve deep into how to harness the combined strengths of React, TypeScript, and Vite to create an efficient web application. We’ll walk through the process of initiating a new project, integrating TypeScript, setting up React, and utilizing Vite’s capabilities to enhance the development experience.
\\nWhether you’re a seasoned web developer or just starting out, I think you’ll find this article valuable. So grab a cup of coffee and let’s get started!
\\nVite offers many unique features that set it apart from other build tools and make it an excellent choice for web development. Let’s take a look at some of Vite’s special features:
\\nTypeScript and Vite are two powerful tools that have gained widespread popularity in the web development community. While TypeScript provides type safety and a strong foundation for building scalable applications, Vite offers a fast and efficient development experience. So, why combine these two technologies? Let’s take a look.
\\nVite offers a unique development experience due to its speed, efficiency, and compatibility with modern JavaScript libraries like React. Here are some specific benefits when using React with Vite:
\\nNow that we understand more about the powerful combination of TypeScript and Vite, let’s dive into the demo portion of this tutorial.
\\nFirst, ensure that you have Node.js ≥v18 installed on your machine, then create a Vite project by running the following command in the terminal:
```bash
npm create vite@latest my-react-app -- --template react-ts
```
Because we passed the project name and template as flags, Vite scaffolds the project without further prompts. If you run npm create vite@latest with no arguments, it will instead prompt you for each choice. Feel free to choose any name, then press Enter to continue; for this demonstration, we'll use the project name my-react-app.
Next, you'll be asked to select a framework for your Vite project. Vite provides a variety of frameworks that may be used for an application: React, Vue.js, Lit, Preact, vanilla JavaScript, and Svelte. For this demo, we'll select React.
\\nLastly, you’ll be prompted to choose a variant for your application. For this demo, we’re building a TypeScript app with Vite, so we’ll select TypeScript.
\\nHere are our selections for the Vite project prompts:
\\nAfter processing the project information we just submitted, Vite will generate the project’s folder structure:
```text
📦my-react-app
 ┣ 📂public
 ┃ ┗ 📜vite.svg
 ┣ 📂src
 ┃ ┣ 📂assets
 ┃ ┃ ┗ 📜react.svg
 ┃ ┣ 📜App.css
 ┃ ┣ 📜App.tsx
 ┃ ┣ 📜index.css
 ┃ ┣ 📜main.tsx
 ┃ ┗ 📜vite-env.d.ts
 ┣ 📜.gitignore
 ┣ 📜index.html
 ┣ 📜package-lock.json
 ┣ 📜package.json
 ┣ 📜tsconfig.json
 ┣ 📜tsconfig.node.json
 ┗ 📜vite.config.ts
```
Below are the key files from the my-react-app project folder:
- index.html — The main entry file; in a Vite project it lives in the project root rather than in a public directory
- main.tsx — Where the code for producing the browser output is executed. This file is common for Vite projects
- vite.config.ts — The configuration file for any Vite project

We've completed the prompts to create a Vite project. Now, let's cd into the project folder and use the below commands to run the application:
```bash
cd my-react-app
npm install
npm run dev
```
To confirm that the application is running, check the terminal — and you should see the following:
\\nPress the o
key to open the application in your web browser:
With the Vite app up and running in our browser, let’s create a blog application using Vite and React that renders some static blog data from a JSON file.
\\nTo get started, let’s update the code in the App.tsx
file to add a navbar to the application’s UI:
```tsx
import './App.css'

function App() {
  return (
    <div className="App">
      <div className="navbar">
        <ul>
          <li>Home</li>
          <li>Blog</li>
        </ul>
      </div>
    </div>
  )
}

export default App
```
Next, let’s update the App.css
file to add some new styles to the application:
```css
* {
  padding: 0px;
  margin: 0px;
  box-sizing: border-box;
}
.navbar {
  background-color: rgb(50, 47, 47);
  color: white;
  padding: 10px;
}
.navbar ul {
  display: flex;
  width: 600px;
  margin: 0px auto;
  font-size: 14px;
  list-style: none;
}
.navbar ul li {
  margin: 10px;
}
```
The resulting UI will look like the following:
\\nNext, we’ll need to add data to our blog application. Let’s create a blog.json
file in the project’s root directory and add the following data:
```json
[
  {
    "id": 1,
    "title": "Building a Todo App with Vue",
    "cover": "https://nextjs.org/static/images/learn/foundations/next-app.png",
    "author": "John Doe"
  },
  {
    "id": 2,
    "title": "Getting started with TypeScript",
    "cover": "https://nextjs.org/static/images/learn/foundations/components.png",
    "author": "Claman Joe"
  }
]
```
Here we defined some arrays of blog objects, which we’ll render in our Vite app’s UI.
\\nNow, let’s create a components
folder in the src
directory. Then, we’ll create a Blog.tsx
file and add the below snippet:
```tsx
import blogData from '../../blog.json'

type Blog = {
  id: number,
  title: string,
  cover: string,
  author: string
}

export function Blog() {
  return (
    <div className="container">
      <div className="blog">
        {blogData.map((blog: Blog) =>
          <div className="card" key={blog.id}>
            <img src={blog.cover} alt="" />
            <div className="details">
              <h2>{blog.title}</h2>
              <h4>{blog.author}</h4>
            </div>
          </div>
        )}
      </div>
    </div>
  )
}
```
This code defines a function that returns a container for blog posts and includes a list of blog cards. Each card displays the title, cover image, and blog post author. The code uses a map
function to loop through a blogData
array and create a card
for each item.
Next, let’s update the App.css
file to style the Blog
component:
```css
.App {
  background: rgb(44, 183, 134);
  height: 100vh;
}
.container {
  width: 600px;
  margin: 0px auto;
}
.container .blog {
  display: flex;
  padding: 10px;
}
.container .card {
  background-color: white;
  margin: 10px;
  padding: 10px;
  border-radius: 4px;
  width: 50%;
  font-size: 10px;
  color: rgb(50, 47, 47);
}
.container .card img {
  width: 100%;
}
```
Lastly, let’s update the App.tsx
component to import and render the Blog
component:
```tsx
import './App.css'
import { Blog } from './components/Blog'

function App() {
  return (
    <div className="App">
      <div className="navbar">
        <ul>
          <li>Home</li>
          <li>Blog</li>
        </ul>
      </div>
      <Blog />
    </div>
  )
}

export default App
```
And with that, we’ve successfully created a blog application using TypeScript and Vite! If all went well, it should look like the image below:
\\nTo compare the startup time of a Vite app to an app built with an alternative like Create React App (CRA), we’d need to build and test both apps under similar conditions. To demonstrate this, I built the same demo application that we just created in this tutorial, except I used CRA. Then, I used the performance inspection feature in Chrome DevTools to test the start time for each version of the app.
\\nHere’s the performance result for the TypeScript app built with CRA; the startup time was 99ms:
\\nAnd here’s the performance of the TypeScript app built with Vite; the startup time was 42ms:
\\n
\\nIn our test, the TypeScript application built with Vite started 58 percent faster than the TypeScript application built with Create React App.
A few troubleshooting tips:
- To run the dev server on a different port, set server.port in vite.config.ts
- If TypeScript isn't set up, make sure you scaffolded with the TypeScript template (--template react-ts)
- If you hit Node version errors, switch to Node 18 or later (nvm use 18 if needed)

In this article, we discussed the many benefits of combining React, TypeScript, and Vite, demonstrated how to build a simple React-based blog application using TypeScript and Vite, and then compared the performance of our app with that of a TypeScript app built with Create React App.
\\n\\nThe fusion of React, TypeScript, and Vite presents an array of benefits for web developers — from React’s component-based approach and TypeScript’s enhanced type safety to Vite’s rapid development experience. This blend promotes scalable, maintainable code and superior performance.
\\nVite’s focus on speed, efficiency, and simplicity helps deliver high-quality, performant web applications. The combination of TypeScript and Vite affords developers of all levels an excellent choice for building high-quality and performant web applications.
\\nI hope you got value from this tutorial. Happy coding!
\\n\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nEditor’s note: This article was updated in May 2025 to reflect changes in framework popularity, add newer Rust frameworks, and better align with developer sentiment in 2025.
\\nRust is one of the most popular languages for developers because of its open source, fast, reliable, and high-performance features. When building a new API in Rust, it is important to consider the benefits and drawbacks of web frameworks for both frontend and backend development.
\\nIn this article, we will discuss what a web framework is and explore the various web frameworks in the Rust ecosystem for frontend and backend development in no particular order.
\\nLet’s get started.
\\nA web framework is a software tool that supports the development of web applications; a web framework can range from a small codebase for micro apps to a large codebase for enterprise apps and everything in between.
\\nThe most extensive web frameworks provide libraries of support for databases, templating, sessions, migration, and other utilities to speed up the development process. More simplistic frameworks focus more on one area, either the backend or frontend, and sometimes without a lot of features.
\\nWhatever your project needs, web frameworks can provide the web services, web resources, and web APIs that development teams need to help bring their ideas to life. When choosing the appropriate web framework for your project, your development team should consider the relative importance of the following:
\\nRust’s memory safety guarantees security, which is achieved via its ownership model. However, not all Rust web frameworks handle security features, like cross-site scripting (XSS) and cross-site request forgery (CSRF), equally. So, you should look out for how security is handled in the framework.
\\nFramework flexibility often comes down to how much control you need versus how much you want to rely on abstractions and conventions. Depending on your experience, you might want to consider the flexibility of the framework and how it benefits your project.
\\nSmaller projects may benefit from using simpler, higher-level abstractions, while larger projects require scalability and efficient concurrency.
\\nKeeping up-to-date with a framework’s development is important — you don’t want to start using a framework whose last update was five years ago, as it may impact both security and compatibility with the latest Rust features.
\\nClear, well-structured documentation can significantly speed up development, especially when onboarding new developers.
\\nCommunity size and engagement can determine how easy it is to find resources, libraries, and help when issues arise in your project journey — “bugs are part of the job” 🙂.
\\n\\nDepending on your project’s priorities, different web frameworks will help you address your most pressing development requirements. In this article, we will specifically discuss frameworks built with Rust.
\\nWeb frameworks make web development and building desktop applications easier for developers. By standardizing the building process and automating common activities and tasks, web frameworks can save developers time and even promote reusing code to increase efficiency.
\\nIn the following sections, we will review web frameworks in Rust as they pertain to both frontend and backend development.
WebAssembly (Wasm) is a low-level binary instruction format that runs in modern web browsers. Languages like C/C++, C#, Go, and Rust can compile to Wasm bytecode, allowing code to run on the web with nearly-native performance. Wasm output runs alongside JavaScript and can be published to npm and other packages.
\\nRust uses a tool called wasm-pack
to assemble and package crates that target Wasm. To learn more about Wasm and Rust, check out our guide to getting started with WebAssembly and Rust.
Leptos is a modern full-stack framework for Rust, built for fine-grained reactivity, server-side rendering (SSR), and seamless frontend/backend integration. It’s similar in spirit to React with Next.js but offers Rust’s safety and performance.
\\nLeptos integrates with Axum and provides a full-stack development experience — you can write your UI and backend in the same language and reuse logic between the two. It also supports hydration, island architecture, and async rendering.
\\nTo get started with Leptos, install the CLI and scaffold a project:
```bash
cargo install cargo-leptos
cargo leptos new my-leptos-app
cd my-leptos-app
cargo leptos dev
```
This sets up a ready-to-run project with both server and client logic preconfigured. Leptos is growing quickly and currently has over 18.5k GitHub stars.
\\nYew is one of the most popular Rust frameworks (it currently has 30.5k stars on GitHub) for building modern web applications. Inspired by React, it leverages a component-based architecture and provides support for state management, async, and more.
\\nHere is a simple example of a Hello World
app with Yew:
You can quickly explore how it works by running the following commands (ensure you have Rust installed):
```bash
cargo install cargo-generate
cargo install trunk
cargo generate --git https://github.com/yewstack/yew-trunk-minimal-template
trunk serve --open
```
The above commands generate boilerplate code that you can use as a starting template for your Yew app. We installed Trunk because Yew uses the Trunk bundler to serve HTML for the web.
\\nPerseus is a Rust framework for building reactive web applications. It supports functionalities similar to Next.js but is designed for the Rust ecosystem.
\\nPerseus’ reactive system is powered by the Sycamore reactive library and has native support for server-side rendering (SSR) and static site generation (SSG). It currently has over 2.8k GitHub stars.
\\nHere is an example of how you can write a simple Hello World
application with Perseus:
```rust
use perseus::prelude::*;
use sycamore::prelude::*;

#[perseus::main(perseus_axum::dflt_server)]
pub fn main<G: Html>() -> PerseusApp<G> {
    PerseusApp::new()
        .template(
            Template::build("index")
                .view(|cx| {
                    view! { cx,
                        p { "Hello World!" }
                    }
                })
                .build()
        )
}
```
To get started with Perseus, run the command below to create a sample app and start the server:
```bash
cargo install perseus-cli
perseus new my-app
cd my-app/
perseus serve -w
```
Sauron is a micro frontend framework that was inspired by the Elm Architecture. It supports events, state management, and both client-side and server-side web development. One of the easiest ways to experiment with it is to use the html2sauron tool to convert HTML into Sauron source code.
Sauron has over 2k GitHub stars, which is impressive for a newer framework and shows that interest in it is growing.
\\nDioxus is a Rust UI library that lets you build reactive cross-platform UI components — it supports web, mobile, and desktop app development. It borrows some of its features from React (including hooks) and uses its own virtual DOM — you can think of it as a hybrid of React with the safety and speed of Rust.
\\nThis is how a component looks like in a Dioxus app:
```rust
fn app(cx: Scope) -> Element {
    let result: &mut u32 = cx.use_hook(|| 0);

    cx.render(rsx!(
        div { "Hello World" }
    ))
}
```
Dioxus has one of the largest communities, with over 20k GitHub stars.
\\nIced is a GUI library for cross-platform development with batteries included. Its architecture is also inspired by the Elm Architecture and offers built-in support for reactive programming, type safety, and speed.
Iced is a little bit opinionated; it expects you to structure your code around four concepts borrowed from the Elm Architecture:
- State — the state of your application
- Messages — user interactions or meaningful events that you care about
- View logic — a way to display your state as widgets
- Update logic — a way to react to messages and update your state

This is a great way to split user interfaces into different concepts that are easy to reason about, so you can find exactly what you are looking for in the codebase.
\\nThe community for Iced is growing rapidly as well, with over 24k stars on GitHub.
\\nTauri is a Rust-based library that enables you to build lightweight desktop applications by leveraging web technologies like HTML, CSS, and JavaScript for the UI. You can use any frontend framework of your choice that compiles to HTML, CSS, and JavaScript.
\\nUnlike Electron (a JavaScript desktop app development framework), which relies on Chromium and Node.js, Tauri uses the system’s native web view. This makes it possible to have even smaller binary sizes and more efficient resource usage.
\\nYou can use the Tauri framework to develop a full-stack desktop app from the frontend to the backend logic.
\\nTauri probably has the largest community support with over 81k GitHub stars as of the time of this writing.
\\nBackend development is the aspect of web development that is focused on server-side operations. A typical backend framework includes features like database management, session handling, templating, ORM, and database migrations for building and maintaining reliable web apps.
\\nVarious web frameworks in the Rust ecosystem enable backend development, which we’ll discuss in this section.
\\nRocket is one of the most mature Rust web frameworks and now supports async/await on stable Rust. Its strong focus on type safety and developer ergonomics makes it ideal for production-grade web services.
\\nRocket has evolved into a highly usable, performant framework with active development and over 25k GitHub stars.
\\nHere is a simple example of a Rocket server that takes two query parameters and returns a Happy Birthday message:
```rust
#[macro_use] extern crate rocket;

#[get("/<name>/<age>")]
fn birthday(name: &str, age: u8) -> String {
    format!("Yayyy, {}, you are {} years old! Happy Birthday to you.", name, age)
}

#[launch]
fn rocket() -> _ {
    // Mount the handler by its actual name, `birthday`
    rocket::build().mount("/birthday-message", routes![birthday])
}
```
Actix Web is a production-grade, high-performance Rust web framework built on the actor model. It’s one of the most mature frameworks in the ecosystem and is known for its blazing speed, extensibility, and feature richness — though it may present a steeper learning curve compared to Axum or Rocket.
\\nIt has a large developer community, with over 23k GitHub stars as of the time of this writing.
\\nHere is a simple Happy Birthday API example using Actix Web to get an overview of what it looks like:
```rust
use actix_web::{get, web, App, HttpServer, Responder};

#[get("/birthday-message/{name}/{age}")]
async fn birthday(name: web::Path<(String, u8)>) -> impl Responder {
    format!(
        "Hello, {}, you are {} years old! Happy Birthday!",
        name.0, name.1
    )
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(birthday))
        .bind("127.0.0.1:8080")?
        .run()
        .await
}
```
If you are looking to adopt Actix Web for your project, I suggest you read our Actix Web adoption guide for more insights.
\\nAxum is an async-first, ergonomic web framework built on top of the Tokio runtime. It balances ease of use with high performance and supports modern web needs like SSR (with Leptos), WebSockets, and layered middleware. Its growing popularity is due to its simplicity, composability, and integration with the broader Tokio ecosystem.
\\nAxum is a very robust web framework. It doesn’t expose much of its low-level implementation, making Axum less complex and friendly to new developers in the community. As a result, you won’t see many scary generics compared to Actix as Axum abstracts most of them away. However, Axum is highly capable and supports many modern web APIs like HTTP/2, WebSockets, etc.
\\nAxum makes building middleware a lot easier for beginners compared to Actix Web. This article about JWT authentication with Axum will help you learn more about how Axum’s middleware system works.
\\nHere is a simple server setup code using Axum that returns a Happy Birthday message:
```rust
use axum::{extract::Path, routing::get, Router};

async fn birthday(Path((name, age)): Path<(String, u8)>) -> String {
    format!(
        "Yayyy, {}, you are {} years old! Happy Birthday to you.",
        name, age
    )
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/birthday/:name/:age", get(birthday));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```
Axum's community is large and growing; as of the time of writing this article, it stands at 18.3k GitHub stars.
\\nCot is a cutting-edge web framework for building high-performance, async-first APIs in Rust. Built on top of Tokio and Hyper, Cot prioritizes modularity and speed over batteries-included features. It’s ideal for developers who want maximum control over their request pipeline and are comfortable assembling tools à la carte.
\\nCot doesn’t include an ORM, templating, or database layer — instead, it encourages modular design. You build from primitives and plug in only what you need.
\\nHere’s how to start a basic Cot app:
use cot::{App, Response};

fn main() {
    App::new()
        .get("/", |_req| async { Response::text("Hello, Cot!") })
        .serve("127.0.0.1:8080")
        .unwrap();
}

To get started, add Cot to your Cargo.toml:

[dependencies]
cot = "0.1"
As of 2025, Cot is still early in development but is gaining attention for its performance and simplicity, with over 1k GitHub stars.
Warp is designed to be fast, lightweight, and composable, so you can get up and running quickly and start building highly performant APIs.
\\nTo further illustrate how easy it is to get started with Warp, here is a simple API mimicking the same Happy Birthday example we’ve been using. It looks much shorter now and still easy to understand:
use warp::Filter;

#[tokio::main]
async fn main() {
    let birthday = warp::path!("birthday" / String / u8)
        .map(|name, age| {
            format!("Yayyy, {}, you are {} years old! Happy Birthday to you.", name, age)
        });

    warp::serve(birthday)
        .run(([127, 0, 0, 1], 3030))
        .await;
}
As of the time of this writing, Warp has over 9.5k stars on GitHub and its developer community continues to grow!
\\nGotham is a flexible web framework built for stable Rust that promotes “stability, safety, security, and speed.” It provides async support with the help of Tokio and hyper out of the box.
Gotham supports routing, extractors (type-safe request data), middleware, state sharing, and testing. However, it does not provide project structure, boilerplate, database support, or the other batteries you need to build a full-fledged web application.
\\nHere is a simple Happy Birthday app with Gotham:
extern crate gotham;

use gotham::helpers::http::response::create_response;
use gotham::hyper::{Body, Response, StatusCode};
use gotham::state::State;

fn birthday(state: State) -> (State, Response<Body>) {
    let res = "Happy Birthday! 🎉".to_string();
    let body = create_response(&state, StatusCode::OK, gotham::mime::TEXT_PLAIN, res);
    (state, body)
}

fn main() {
    gotham::start("127.0.0.1:7878", || Ok(birthday));
}
Loco brings a Django-inspired full-stack experience to Rust. It includes a built-in ORM, generators, templating, and project scaffolding. If you’re coming from Rails or Django and want similar ergonomics with Rust’s safety and performance, Loco is a promising new entrant in 2025.
\\nLoco emphasizes convention over configuration, includes a powerful CLI, and features a built-in ORM, migration system, templating, and more.
\\nWith Loco, you can scaffold a new project and start building right away:
cargo install loco-cli
loco new my-loco-app
cd my-loco-app
loco dev
Loco’s project layout will feel familiar to developers coming from Rails or Django, with clear directories for models, controllers, and views.
\\nExample route handler in Loco:
// src/controllers/hello.rs
use loco_rs::prelude::*;

pub async fn greet(_req: Request) -> Result<Response> {
    Ok(Response::text("Hello from Loco!"))
}

You can define routes in routes.rs and use Loco's CLI to generate models, migrations, and more. The framework is gaining momentum and currently has about 1k GitHub stars.
Rouille is a micro web framework that employs a linear request-and-response design via a listening socket that parses HTTP requests.

It is built for simple applications and doesn't come with batteries included. However, it is agnostic enough to let you integrate any library you wish into your project. For example, because it has no native middleware support, you can integrate the Hyper Middleware library to handle your middleware system.
\\nRouille is still very young and currently has about 1.1k GitHub stars. Here is a simple Hello World application built with Rouille:
use rouille::Response;

fn main() {
    // Rouille handles each request with this closure on the listening socket
    rouille::start_server("0.0.0.0:80", move |_request| {
        Response::text("hello world")
    });
}
Thruster is a fast, middleware-based Rust web framework inspired by the layering and design of Koa and Express.js. It is SSL-ready, secure, intuitive, and testable.
\\nThruster is built to accommodate async/await and provides support for middleware, error handling, databases, and testing.
Thruster is about as young as Rouille, and its community is not very big yet.
\\nHere is a quick Happy Birthday code overview of how Thruster works:
use thruster::{m, middleware_fn, App, BasicContext as Ctx, MiddlewareNext, Request, Server};
use thruster::{MiddlewareResult, ThrusterServer};

#[middleware_fn]
async fn birthday(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    // Write the response body into the context passed in by the framework
    context.body("Happy Birthday! 🎉");
    Ok(context)
}

#[tokio::main]
async fn main() {
    let mut app = App::<Request, Ctx, ()>::new_basic()
        .get("/birthday", m!(birthday));
    let server = Server::new(app);
    server.build("0.0.0.0", 4321).await;
}
Tide is a minimal framework similar to Express.js (Node.js), Sinatra (Ruby), and Flask (Python) for rapid development that promotes building web apps asynchronously. It has most of the functionality you'll find in mature web frameworks, including routing, auth, sockets, logging, template engines, middleware, testing, and other utilities.
Tide uses async-std, which is built for speed and safety, for its asynchronous implementation. Tide's community is large and growing compared to Thruster and Rouille — it has about 5k stars on GitHub.
Here is a simple Happy Birthday application with Tide:
use tide::Request;

async fn birthday(_req: Request<()>) -> tide::Result<String> {
    Ok("Happy Birthday! 🎉".into())
}

#[async_std::main]
async fn main() -> tide::Result<()> {
    let mut app = tide::new();
    app.at("/birthday").get(birthday);
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}
Dropshot is a simple and lightweight server-side library for creating REST APIs. It has support for OpenAPI specs, async, and logging. You’ll like it if you just want to expose a few APIs in your project. It’s not a full web framework (at least, I won’t consider it to be one) for all web development use cases.
\\nDropshot has less than 1,000 GitHub stars as of the time of this writing.
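There's no snippet for Dropshot above, so here is a hedged sketch of a minimal endpoint based on its endpoint macro; exact signatures vary between Dropshot versions:

use dropshot::{endpoint, ApiDescription, ConfigDropshot, ConfigLogging, ConfigLoggingLevel,
    HttpError, HttpResponseOk, HttpServerStarter, RequestContext};

// A single GET endpoint registered with Dropshot's ApiDescription
#[endpoint { method = GET, path = "/birthday" }]
async fn birthday(_rqctx: RequestContext<()>) -> Result<HttpResponseOk<String>, HttpError> {
    Ok(HttpResponseOk("Happy Birthday! 🎉".to_string()))
}

#[tokio::main]
async fn main() -> Result<(), String> {
    let log = ConfigLogging::StderrTerminal { level: ConfigLoggingLevel::Info }
        .to_logger("birthday")
        .map_err(|e| e.to_string())?;
    let mut api = ApiDescription::new();
    api.register(birthday).unwrap();
    HttpServerStarter::new(&ConfigDropshot::default(), api, (), &log)
        .map_err(|e| e.to_string())?
        .start()
        .await
}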
Actix Web, Rocket, Axum, warp, Leptos, Cot, and Loco are all popular web frameworks for Rust, each with its own unique features and strengths. Here's how they compare across common criteria:
\\nFeatures | \\nExplanation | \\nActix Web | \\nRocket | \\nAxum | \\nwarp | \\nLeptos | \\nCot | \\nLoco | \\n
---|---|---|---|---|---|---|---|---|
Async/await | \\nModern async support for performance and scalability | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n
Middleware | \\nCustomizable request/response handlers | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n
WebSockets | \\nBuilt-in or optional crate support | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n
SSR/Frontend integration | \\nSupport for server-side rendering or frontend pairing | \\n❌ | \\n❌ | \\n✅ (via Leptos) | \\n❌ | \\n✅ | \\n❌ | \\n✅ | \\n
ORM / DB support | \\nFirst-party or integrated ORM/database tooling | \\n✅ | \\n✅ | \\n✅ | \\n✅ | \\n❌ | \\n❌ | \\n✅ | \\n
GitHub stars (as of June 2025) | \\nCommunity size indicator | \\n21K+ | \\n25K+ | \\n19K+ | \\n10K+ | \\n3K+ | \\n1K+ | \\n1K+ | \\n
Real-world usage insights can also help when choosing the right framework. A recent Reddit thread discussing Rust web frameworks is a good place to see how developers weigh these trade-offs in practice.
\\nIn this article, we discussed the different Rust web frameworks. This list is by no means exhaustive. However, it covers the most popular web frameworks. While the choice of which framework to use will largely depend on your project requirement, this list should give you a head start to further your research.
\\nGood luck!
In this post, I'll show you how to build a simple weather app using Claude. The app displays a weather forecast based on the selected city, and we'll walk through the entire development process—from setting up the infrastructure to building the frontend.
\\nTo view the final project, check out my GitHub repo.
\\nHere’s what our app will look like:
\\nBefore we begin using Claude, it may help to generally understand how it works.
Similar to ChatGPT and other AI assistants, Claude operates through an interface where you can ask it questions. Asking is typically called prompting. Some even call it prompt engineering, where you build by interacting with an AI assistant in natural language instead of writing code.
\\nTaking this a step further, you can also use Claude to conduct “vibe coding.” A difference between what I’m presenting here and vibe coding is that I have a pre-set tech stack chosen ahead of time. If I were to do the same with vibe coding, I would focus more on the conceptual tasks instead of specifying a tech stack. Vibe coding is definitely relevant, and you can use Claude to help do that as well. It’s just not exactly what I’m doing here.
\\nClaude can help with a variety of tasks, including general question and answer, writing, software development, data analysis, and even actual conversations:
Behind the scenes, Claude uses a large language model (LLM), an artificial neural network trained to generate intelligent responses:
\\n\\n
An explanation of artificial neural networks is well beyond the scope of this post. Basically, it’s a data structure that attempts to mimic the way the human brain makes decisions.
LLMs are built from massive training datasets. When an LLM is "trained," the connections between different points of data are weighted. Ultimately, when you interact with Claude, the model traverses those weighted data points to make a decision or return a response. The models currently used with Claude are the product of much development and training.
\\nWhen you work with Claude today, you can use different LLMs, including Claude 3.7 Sonnet, Claude 3.5 Haiku, Claude 3 Opus, and Claude 3.5 Sonnet.
\\nTo start using Claude to help you build a web app (or really any application), it helps to have already started conceptualizing what you want.
\\nYou can just start a conversation with Claude and ask what it would suggest. This helps if you are not sure of the best technology to use, or the best implementation approach. As I said in the intro, I’m going to build a weather application, so I’ll start with a prompt like the following:
\\nRight away, you notice that it suggested a JavaScript framework and to use a web app. It also listed some popular weather APIs we could use. It provided a list of features to consider, and even asked if you wanted a basic React Component that could operate as an example.
\\nI could’ve started with a list of requirements as well, but I’ve also found it helpful to ask open-ended questions to start. Claude may have considered some things that you might not have.
From here, my usual process is to write out requirements, have Claude scaffold the project, build it out, and then iterate on fixes and tweaks.

All along the way, if I encounter an issue, I usually ask Claude for help with that specific task before proceeding. In the next few sections, I'll go through this process from start all the way through the finishing tweaks.
\\n\\nBased on Claude’s suggestion when I asked about building a weather app, I am next going to write out a formal list of requirements. I’ll then pass these requirements onto Claude to actually do the scaffolding. From the initial suggestion, I narrowed down my requirements to the following:
\\nI take this information and prompt Claude:
\\nWith that prompt, Claude then builds an Artifact. According to the official docs, Artifacts “allow Claude to share substantial, standalone content with you in a dedicated window separate from the main conversation.”
\\nYou’ll see the Artifact next to the prompt screen:
\\nNotice also in this experience that Claude is remembering the conversation. This is super helpful because you can reference things you discussed earlier. Once the Artifact is created, you’ll note that Claude also lists a feature list.
\\nThe great part about building with Claude is that it is a conversation. You can continually tweak the same part of the project or go back and forth between sections of code.
I'd like to make a tweak to the implementation's styles. I noticed that the CSS is done with Tailwind, whereas I'd like to use a standalone CSS file.
\\nI’ll ask Claude to refactor what it has generated to not use Tailwind:
\\nWithin what it refactored, it looks like it decided to remove the dependencies for the weather icons as well, and it generated SVGs for the weather conditions:
\\nIt even listed the benefits of this approach:
\\nWith the component scaffolded out, I’d like to go ahead and start building the application. When I use Claude to help build projects, I typically build out a scaffolded body and then add in what Claude refactors or generates for me.
\\nThis is different depending on the stack (i.e. .NET web API vs. React Frontend etc.). Sometimes this can be a bit tedious, but one trick I’ve learned is to use commits . With commits, you can see what changed between iterations and potentially undo a change if you need to.
\\nTo start, I go ahead and ask Claude to help me do the initial scaffolding:
\\nInitially, Claude creates the main parts of the React project like the App.js
file, the package.json
and others. Then Claude shows me how to leverage CLIs to set up the project:
Looking at Claude’s implementation steps, I noticed that it is using Create React App. This project is no longer maintained, so I’m going to ask Claude to refactor that setup to use something other than Create React App:
\\nClaude agrees and redoes the setup using Vite. In addition to redoing the setup process, Claude also highlights the advantages of using Vite:
\\nAt the end of the response, Claude puts the setup commands in a set that could be converted into a shell script or run individually:
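Claude's exact output isn't reproduced here, but a typical Vite + React setup sequence looks something like this (the project name is a placeholder):

npm create vite@latest weather-app -- --template react
cd weather-app
npm install
npm run dev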
\\nWith this response, I can now start building the application. I also am a fan of prettier for JavaScript or TypeScript projects. I know how to do this, but for fun, I went ahead and asked Claude if it could help me with the setup:
Claude provided a set of instructions and files to be changed to properly set up Prettier. Claude also recommended (and showed how to) integrate with ESLint, and even use Husky for pre-commit hooks. I set up ESLint but didn't go with Husky, since this is such a simple project.
\\nWith all of this setup, I created an API key with OpenWeatherMap and then attempted to run it locally. I ran into the following error:
\\nBased on this error, I copied the message into Claude to see if it could help me:
\\nReading through Claude’s changes, I realized that the issue is just that my WeatherComponent was not saved as with a .jsx
extension. I asked Claude about this, and it confirmed:
With that, I tried again to run the application with my API key from OpenWeatherMap and encountered an error message:
It looks like there is an issue with the use of the onecall endpoint. I took the error message and asked Claude to help figure this out:
In this case, Claude went out to the internet and did a web search on OpenWeatherMap's API documents. Claude confirmed we were using the wrong endpoint and refactored the API call to use the forecast endpoint:
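The refactored component appears above only as a screenshot, so here is a hedged sketch of the core change: a call to OpenWeatherMap's forecast endpoint, with city and API_KEY as placeholders you'd supply yourself:

// Sketch: fetch a 5-day/3-hour forecast from OpenWeatherMap
const url = `https://api.openweathermap.org/data/2.5/forecast?q=${city}&units=imperial&appid=${API_KEY}`;
const response = await fetch(url);
if (!response.ok) throw new Error(`Weather API error: ${response.status}`);
const data = await response.json(); // data.list holds the forecast entries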
I also noted that in the refactored fix, Claude used a .jsx extension on the file, as it remembered that we had the .js error from before.
With the refactored component that Claude created, I was able to get it working:
\\nOne last tweak I want to make is that I noticed the city dropdown does not show what you selected when you select a city. So I’m going to ask Claude to refactor the component to show that:
\\nWith those changes in place, the city name is shown correctly:
\\nWith that, I have built my application. The next steps would normally be to work through a deployment process and build out a CI/CD pipeline. For the purpose of this post, I’m going to stop here, and instead talk about my general experiences with Claude and a few other things that you could look into.
I've used Claude to build projects and help debug very specific issues. In most cases, I've found it to be very helpful. It doesn't necessarily replace my work, but rather augments what I do and enables me to focus on real problems.
\\nClaude (and AI Assistants in general) are really helpful with mundane tasks that would otherwise eat up time. AI assistants are also great in helping to answer quick questions or to help suggest solutions to new problems.
\\nIt’s easy to get lost in all the changes. Claude does retain a conversation history, but it can also frequently make large sweeping changes.
\\nIn your prompts, it helps to be very specific. You can even ask for it to only focus on one set of changes. I’ve also found it very helpful to commit changes I make from Claude to a branch, so I can actively observe what is happening and undo something if it breaks.
\\nIf you look at the commits on the Master branch of the example project you’ll see:
\\nIf you are trying to get Claude to help you with a larger, more complex project, it may suggest things that do not integrate correctly. This happened to me when I was working through an Azure Function written in the isolated model.
With the Azure Function, it was attempting to use the in-process model, which requires different bindings. This is all very specific, but that's my point: it is important to provide Claude with as much context as possible when doing development.
\\nI’ve also found that Claude can be very helpful with architectural questions and even includes diagrams. In the following, I was asking about microfrontend caching:
\\nClaude’s chat history is also of great value. You can go back in your chats and retrieve a conversation you had months ago if you forgot what was done. A shared context makes working through problems easier, and at times even more fun.
\\nIt’s better to use Claude to help with development vs. having it do all the development for you. You can certainly get Claude to build all the parts of an application. But as a developer, you also need to understand individual parts. Working directly on the code always helps.
\\nOverall, Claude is a great tool for software engineers. There are often times when you need to build a simple data structure or piece of functionality that Claude can help you quickly scaffold. As I said earlier, it is important to provide context and make your prompts as specific as possible.
\\nWhen working with Claude, I find myself learning new things and iterating faster on problems.
\\nI’ve personally had great experiences using Claude. The Artifacts feature is particularly helpful, as you can easily see the code as you develop your application alongside Claude. In this post, I only touched on the basics and built a very simple application. You can use Claude to build a more robust architected system. The important part is to always provide as much context as you can in the prompts.
\\nAnd it doesn’t stop with Claude, either. Anthropic has just released Claude Code, an Agentic command-line-based tool that developers can use to delegate tasks like refactoring or making simple changes.
\\nClaude really helps with routine tasks and gives developers more time to focus on bigger problems their teams face. I encourage you to look at my sample project, and try out Claude for yourself.
\\nThanks for reading my post!
One of the biggest challenges in frontend development is delivering high-quality UIs under tight deadlines. Clients want apps that are faster, shinier, and production-ready yesterday.
There are just too many UI libraries and frameworks. Will you use Tailwind CSS, MUI, Bootstrap, SCSS, or just vanilla CSS? Maybe you want to use Tailwind, but the client keeps messaging, "How's the development going?" A meeting is coming up. You don't know what to do. Which CSS framework should you choose?
\\nIn this article, we’ll cover six CSS frameworks and libraries that were either released or gained traction after 2024. These tools can help you move faster, build cleaner interfaces, and deliver on tight deadlines.
\\nBeer CSS is a CSS framework based on Material Design 3 principles. You might find yourself needing to build a web app with Material Design, but you don’t have enough time to create a consistent system from scratch. Beer CSS solves this by offering sleek, ready-to-use components that minimize setup time.
\\nBeer CSS shines when you need to implement Material Design quickly and consistently, without writing everything from scratch. It reduces bloat by offering prebuilt, modular components and eliminates the need for a heavy setup thanks to its CDN-first approach.
To include Beer CSS in your project, add the following to index.html, pinning exact versions as needed:

<link href="https://cdn.jsdelivr.net/npm/beercss/dist/cdn/beer.min.css" rel="stylesheet">
<script type="module" src="https://cdn.jsdelivr.net/npm/beercss/dist/cdn/beer.min.js"></script>
<script type="module" src="https://cdn.jsdelivr.net/npm/material-dynamic-colors/dist/cdn/material-dynamic-colors.min.js"></script>
Once added, you can immediately use Beer CSS components. For instance, we can create a new Vite + React app via npm create vite@latest and add components directly into App.jsx:

function App() {
  return (
    <>
      <button>
        <i>home</i>
        <span>Button</span>
      </button>
      <progress className="circle"></progress>
      <label className="slider">
        <input type="range" defaultValue="25" />
        <input type="range" defaultValue="50" />
        <span></span>
      </label>
    </>
  );
}

export default App;
This code will add a button, a circular progress bar, and a slider to show how those elements look on the page:
\\n\\n
When using JSX, replace class with className and close all self-closing tags (e.g., <input />, <img />).
DaisyUI is a Tailwind CSS plugin designed to make development faster, cleaner, and easier. It offers a collection of component class names (like card or hero), allowing developers to build common UI elements quickly without writing long strings of utility classes like you would with vanilla Tailwind.

DaisyUI solves one of the biggest hurdles in Tailwind CSS: verbosity. While Tailwind is powerful, building complex components often requires long strings of utility classes. DaisyUI abstracts those patterns into readable class names like card, significantly improving readability and development speed.
In this tutorial, we’ll create a Vite + React app and install DaisyUI. You can also follow the official installation page for other frameworks.
\\nTo create a Vite + React app:
\\nnpm create vite@latest ./ -- --template react\\n\\n
Then install Tailwind CSS and DaisyUI:
Add Tailwind CSS to vite.config.js:

import { defineConfig } from "vite";
import tailwindcss from "@tailwindcss/vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [tailwindcss(), react()],
});
Replace the contents of App.css with:

@import "tailwindcss";
@plugin "daisyui";
With DaisyUI installed, you can start using its component classes in your HTML or JSX.
For example, creating a visually appealing hero section is straightforward. In a React component (App.jsx):

<div
  className="hero min-h-screen"
  style={{
    backgroundImage:
      "url(https://img.daisyui.com/images/stock/photo-1507358522600-9f71e620c44e.webp)",
  }}
>
  <div className="hero-overlay"></div>
  <div className="hero-content text-neutral-content text-center">
    <div className="max-w-md">
      <h1 className="mb-5 text-5xl font-bold">Hello there</h1>
      <p className="mb-5">
        Provident cupiditate voluptatem et in. Quaerat fugiat ut assumenda excepturi exercitationem
        quasi. In deleniti eaque aut repudiandae et a id nisi.
      </p>
      <button className="btn btn-primary">Get Started</button>
    </div>
  </div>
</div>
You’ll see a polished hero section rendered in seconds:
DaisyUI speeds up development, but customizing components still requires a working knowledge of Tailwind CSS.
\\nCirrus UI is a simple, modular, and responsive SCSS framework that aims to accelerate the developer’s work through pre-built components and utility classes. It offers a streamlined developer experience, enabling rapid prototyping and custom design without starting from scratch or managing complex CSS architecture.
\\nCirrus UI helps solve common frontend challenges such as repetitive CSS patterns and maintaining design consistency. By providing a suite of pre-styled components and utilities, it significantly reduces the time required for prototyping and iteration.
\\nIts modular architecture also helps avoid unnecessary bloat; developers can include only what they need. This leads to faster load times, a cleaner codebase, and easier long-term maintenance. Sass integration adds another layer of customization, empowering teams to implement deeply personalized designs without rewriting core styles.
\\nCirrus is a CSS framework like Tailwind or Bootstrap.
\\nIf you’re using npm (a package manager for JavaScript), you can install Cirrus with this command:
\\nnpm i cirrus-ui --save\\n
After installing, you need to import it so your project knows to use Cirrus’ styles.
If you're using a tool like Vite, in your main JS/JSX file (main.js or main.jsx), write:
import \\"cirrus-ui\\";\\n
Once it’s set up, try using some built-in styles.
In this example, we write a simple React component to test Cirrus-styled buttons:

function App() {
  return (
    <>
      <button className="btn btn-plain">Plain</button>
      <button className="btn btn-transparent">transparent</button>
      <button className="btn btn-light">light</button>
      <button className="btn btn-dark">dark</button>
      <button className="btn btn-black">black</button>
      <button className="btn btn-primary">primary</button>
      <button className="btn btn-link">link</button>
      <button className="btn btn-info">info</button>
      <button className="btn btn-success">success</button>
      <button className="btn btn-warning">warning</button>
      <button className="btn btn-danger">danger</button>
    </>
  );
}

export default App;
We will be presented with the following page:
\\nThis renders a clean, fully styled button set right out of the box, a great starting point for any SCSS-based UI.
\\nHalfmoon is an open source, responsive CSS framework that positions itself as a highly-customizable, drop-in replacement for Bootstrap. It stands out for its heavy use of CSS variables for theming, built-in dark mode, and multiple core themes, all while leveraging the familiar Bootstrap ecosystem.
\\nHalfmoon provides a unique aesthetic layered on top of the Bootstrap foundation, offering more direct control through CSS variables. It’s a good fit for developers who want to benefit from Bootstrap’s component ecosystem but need a more customizable and themeable framework.
\\nIt also solves a longstanding Bootstrap pain point: building responsive, functional sidebar layouts. With Halfmoon’s dedicated sidebar component, you can skip third-party workarounds and reduce custom layout code.
\\nFor this tutorial, we’ll use a vanilla JavaScript project with a Vite template. Once the project is set up, follow these steps:
\\nnpm i [email protected]\\n
main.js
import \\"halfmoon/css/halfmoon.min.css\\";\\n
Since Halfmoon doesn’t include its own JavaScript, it relies on Bootstrap for component behavior. Add the following lines to your HTML:
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-..." crossorigin="anonymous">
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5/dist/js/bootstrap.bundle.min.js" integrity="sha384-..." crossorigin="anonymous"></script>
Halfmoon uses Bootstrap under the hood for JavaScript-driven components like modals and dropdowns. Without it, these features won’t work.
Let's render some buttons to test that everything is working. In your main.js file, add the following:

import "halfmoon/css/halfmoon.min.css";

document.querySelector("#app").innerHTML = `
  <div>
    <button type="button" class="btn btn-success">Success</button>
    <button type="button" class="btn btn-danger">Danger</button>
    <button type="button" class="btn btn-warning">Warning</button>
    <button type="button" class="btn btn-info">Info</button>
    <button type="button" class="btn btn-light">Light</button>
    <button type="button" class="btn btn-dark">Dark</button>
    <button type="button" class="btn btn-link">Link</button>
  </div>
`;
We will see the following output:
\\nYou’ll quickly see why it feels familiar to Bootstrap, and why Halfmoon markets itself as a streamlined alternative.
\\nHalfmoon depends on Bootstrap’s JS runtime. Always verify that Bootstrap is correctly loaded in projects using Halfmoon.
\\n\\nPico CSS is a minimalist, lightweight CSS framework designed around semantic HTML. Its core philosophy is to style native HTML elements, such as <button>
, <table>
, <nav>
, and <article>
, so they appear elegant and responsive by default, without the need for extra utility classes or custom CSS.
\\n
With its motto, “Write HTML, add Pico CSS, and voilà!”, Pico offers a clean, accessible UI with minimal effort.
Pico tackles two common frontend pain points: overly complex CSS frameworks and cluttered HTML filled with utility classes. Instead of forcing you to style every element manually, Pico provides a lightweight foundation that looks polished out of the box. This encourages developers to focus on writing semantic, well-structured HTML without sacrificing design.
Getting started with Pico CSS reflects its minimalist philosophy. Just add the following CDN link to the <head> of your HTML file:

<link
  rel="stylesheet"
  href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.min.css"
/>
That’s it! No JavaScript, build tools, or extra configuration required.
\\nOnce the stylesheet is linked, Pico automatically applies styles to standard HTML elements. For example, to see how buttons are rendered, you can add:
<button class="secondary">Secondary</button>
<button class="contrast">Contrast</button>
These buttons are styled immediately, without any additional setup
CodeStitch takes a different approach from the other frameworks on this list. Instead of offering components and utilities for a skeleton UI, it provides a curated library of pre-built pages that serve specific use cases. These come with ready-to-use HTML, CSS, and JavaScript snippets. The core value: developers can quickly assemble production-ready web pages by copying and pasting visually appealing blocks that are already "stitched together."
CodeStitch primarily addresses the need for extremely fast prototyping. It's especially useful for freelance developers or agencies that frequently handle similar types of projects and need to jump into customization quickly. For teams facing tight deadlines, or those who prefer not to reinvent common sections like contact forms, feature lists, or headers, CodeStitch provides a shortcut: production-ready code blocks that can be copied, pasted, and adapted with ease.
Getting started with CodeStitch doesn't involve installation in the traditional sense. Instead, it's all about browsing their online library and copying the code you need. For example, if you find a testimonials section you like, you would copy the HTML markup into your HTML or JSX file and the corresponding CSS (or SCSS/LESS) into your stylesheet, selecting the format that best fits your project. Let's say you are building a gym website for a client. Rather than designing each section from scratch and assembling components manually, you can browse CodeStitch's templates, choose one that fits, and start customizing right away. A polished layout can be up and running in seconds:
\\n
The code is provided right next to it:

If we click the Get Code button, we'll see the following page:
\\n\\n
Everything in the CodeStitch library is completely free to use and customize. You can easily modify any layout or section to fit your project’s branding or structure.CodeStitch is not just for full-page dashboards. For instance, if you want to add a testimonials section, simply navigate to the Reviews category in the sidebar, choose a layout you like, and customize as needed:
\\nThe code for it:
CodeStitch provides raw snippets, so it's up to you to adapt them to your framework. For instance, when using React, you'd have to convert standard HTML attributes like class to className and properly close self-closing tags like <img /> and <input />.
Phew, what a ride! We just explored six modern CSS frameworks that offer fresh, efficient approaches to front-end developers in 2025.
From the Material Design simplicity of Beer CSS, to the Tailwind-powered components of DaisyUI, the Sass-powered structure of Cirrus UI, the Bootstrap-compatible customization of Halfmoon, the semantic elegance of Pico CSS, and the ready-to-ship templates of CodeStitch – each framework provides a unique path to building faster, more maintainable UIs.
\\nEach tool has its strengths. Whether you’re looking for prebuilt components, granular utility control, minimalist styling, or drag-and-drop sections, there’s something here to match your workflow and development style.
\\nThe “best” framework isn’t one-size-fits-all; it’s the one that fits your project’s needs, your team’s familiarity, and your timeline. Expanding your toolkit with modern CSS options doesn’t just make development easier; it makes you more adaptable and efficient.
\\nWe hope this list has given you a few new tools to explore and maybe even a favorite to carry into your next project. Now go forth and strengthen your CSS toolkit, and build faster, cleaner, and more polished web experiences in 2025.
Next.js middleware lets you run code before a request finishes and update the response. Alongside Edge Functions, it's a powerful tool that enables developers to achieve enhanced functionality and performance.
\\nMiddleware was introduced in Next.js v12 and has been improved in consecutive versions. Starting with Next.js v13, middleware can be used to respond directly to requests without going through the route handler. This can improve performance and security. Middleware can also be used to work with Vercel Edge Functions.
\\nEdge Functions allow you to run code at the network’s edge. So, in this post, we’ll learn how middleware works with Edge Functions and why it’s important to know. Now, let’s get started!
\\nMiddleware in Next.js is a piece of code that allows you to perform actions before a request is completed and modify the response accordingly. It bridges the incoming request and your application, providing flexibility and control over the request/response flow.
In Next.js v12, middleware was introduced to enhance the Next.js router based on user feedback. Later versions made it more popular and improved its developer experience, and as of v15, middleware also supports the Node.js runtime.
\\nWith middleware, you can seamlessly integrate custom logic into the request/response pipeline, allowing you to tailor your application’s behavior to specific requirements. It lets you modify requests and responses, making it easier to enrich headers, implement URL redirects, or keep track of incoming and outgoing requests. It’s a versatile tool that can improve your web application’s performance, security, and functionality.
\\nEditor’s note: This post was updated by Abhinav Anshul in May 2025 to align with Next.js v15, and include practical use cases for both middleware and Edge Functions.
Let's create a middleware and add a piece of code to show how it works. To use middleware, you need to create a file called middleware.js or middleware.ts at the same level as your app directory. Unlike Next.js 12, where the file needed an underscore prefix, v15 doesn't require that. This file will contain the code you want to run before a request is completed.
Inside the file, paste this:
// middleware.ts
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(req: NextRequest) {
  const { pathname } = req.nextUrl;

  // Restrict the /api/hello endpoint to only POST
  if (pathname === "/api/hello") {
    if (req.method !== "POST") {
      return new NextResponse(
        `Cannot access this endpoint with ${req.method}`,
        { status: 400 }
      );
    }
  }

  return NextResponse.next();
}
Middleware functions are crucial in processing requests and responses in a web application. Middleware works by intercepting requests and responses before they are passed to the application’s core code.
\\nOne of the major use cases of middleware is authentication. It provides a way to verify the user’s identity before granting them access to the application. By authenticating users, middleware ensures that only authorized individuals can access protected resources, enhancing the application’s overall security.
\\nAuthorization is another key feature of middleware. Once users are authenticated, middleware can implement access control based on their roles or permissions. This means certain pages or resources within the application can be restricted to specific users or user groups.
\\nAdditionally, middleware can use caching mechanisms to optimize the application’s performance. By storing frequently accessed data in a cache, middleware reduces the need to fetch it from the underlying database repeatedly.
\\nHowever, unlike API routes, middleware does not return HTML, UI rendering, or any kind of JSON structure. Instead, it only processes a request through the application. This is by design and not exactly a limitation.
\\nHere are the actions middleware can actually take:
\\nNextResponse.next()
You can add a check or a condition before allowing middleware to reach its usual destination. This check could verify a user token, required data, or any logical condition where you want to stop the middleware's normal flow if it isn't met.

For example, here I am ensuring the user is valid. Only then is NextResponse.next() called, and the middleware continues:

if (isValidUser) {
  return NextResponse.next();
}
NextResponse.redirect(url)
This is essentially used for sending the user to a different URL based on certain conditions. Here, if the user is not logged in, the middleware redirects them to a login page. You could also achieve this with client-side logic, but that approach brings security concerns, and the user will see a flash of the page before the redirection. Middleware is the best fit here:

if (!isLoggedIn) {
  return NextResponse.redirect(new URL("/login", request.url));
}
NextResponse.rewrite(url)
This will rewrite the request path without changing the URL in the browser. It can be useful for testing purposes or even serving content from a different server/path.
For example, here you check the locale for the language. If it's German (de), the middleware prepends /de to the path and serves the content in German:

if (locale === "de") {
  return NextResponse.rewrite(new URL("/de" + request.nextUrl.pathname, request.url));
}
NextResponse(body, options)
This is a constructor that stops the request from proceeding further. Here you can send HTML, JSON, or plain text — but strictly no server-rendered content:

if (isRateExceeded) {
  return new NextResponse("Too many requests, please try again after some time", { status: 429 });
}
Middleware allows you to add functionality to your application without modifying the application code. This makes it a great way to add new features or fix bugs without deploying a new version of your application. Additionally, middleware can be used to scale your application by offloading tasks to other servers.
\\nWhile there are several advantages to using middleware, there are also downsides. Middleware can add complexity to your application. This can make developing, deploying, and maintaining your application more difficult.
Middleware can also add costs to your application, because you will need to pay for and maintain the hosting it runs on. In the case of Next.js, using middleware also means you are locked into the Next.js environment; if you ever have to move off Next.js and create and host JS bundles elsewhere, that becomes a bottleneck.

Lastly, middleware adds latency to your application, because its code must execute before your application can process the request. To mitigate this, you might need loaders or skeleton screens, which can make for a substandard user experience.
\\nHere are a few scenarios where Next.js middleware can be used:
\\nNext.js middleware provides access to geographic information through the NextRequest
API’s geo
key. This allows you to serve location-specific content to users. For instance, if you’re building a website for a shoe company with branches in different regions, you can display trending shoes or exclusive offers based on the user’s location.
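As a hedged sketch of the idea (the geo object is populated on hosting platforms such as Vercel, and the /uk route is illustrative):

// middleware.js
import { NextResponse } from "next/server";

export function middleware(request) {
  const country = request.geo?.country ?? "US"; // fall back when geo data is unavailable
  if (country === "GB") {
    // Serve region-specific content without changing the visible URL
    return NextResponse.rewrite(new URL("/uk" + request.nextUrl.pathname, request.url));
  }
  return NextResponse.next();
}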
Using the cookies key available in the NextRequest API, you can set cookies and authenticate users on your site. You can also enhance security by blocking bots, users, or specific regions from accessing your site. By rewriting the request, you can redirect them to a designated blocked page or display a 404 error.
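Here's a minimal sketch of both ideas; the user-agent check and the /blocked page are illustrative:

import { NextResponse } from "next/server";

export function middleware(request) {
  // Crude bot check based on the User-Agent header
  const ua = request.headers.get("user-agent") ?? "";
  if (/curl|bot|spider/i.test(ua)) {
    return NextResponse.rewrite(new URL("/blocked", request.url));
  }
  const response = NextResponse.next();
  response.cookies.set("visited", "true"); // set a cookie on the outgoing response
  return response;
}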
A/B testing involves serving different versions of a site to visitors to determine which version elicits the best response. Previously, developers performed A/B testing on the client side on static sites, resulting in slower processing and potential layout shifts.
\\nHowever, Next.js middleware enables server-side processing of user requests, making the process faster and eliminating layout shifts. A/B testing with middleware involves using cookies to assign users to specific buckets, and servers then redirect users to either the A or B version based on their assigned bucket.
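A minimal sketch of that bucketing approach, where /home-a and /home-b are assumed variant routes:

import { NextResponse } from "next/server";

export function middleware(request) {
  // Reuse the visitor's bucket if one exists; otherwise assign one at random
  const bucket = request.cookies.get("bucket")?.value ?? (Math.random() < 0.5 ? "a" : "b");
  const response = NextResponse.rewrite(new URL(`/home-${bucket}`, request.url));
  response.cookies.set("bucket", bucket); // persist the assignment for future visits
  return response;
}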
You can use middleware to prevent abuse by implementing a rate limiter. Next.js middleware allows you to implement a lightweight, basic rate limiter without a traditional backend server or database.

It can also be highly customizable based on IP, headers, etc. Under the hood, it tracks the requests coming from a particular IP address; if they exceed a defined threshold, the middleware blocks subsequent requests.
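Here's a hedged sketch of the idea. The in-memory map resets on every deploy and isn't shared across regions, so treat it as a toy, not production rate limiting:

import { NextResponse } from "next/server";

const hits = new Map();   // ip -> { count, windowStart }
const LIMIT = 20;         // max requests...
const WINDOW_MS = 60_000; // ...per one-minute window

export function middleware(request) {
  const ip = request.headers.get("x-forwarded-for") ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip) ?? { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(ip, entry);
  if (entry.count > LIMIT) {
    return new NextResponse("Too many requests, please try again later", { status: 429 });
  }
  return NextResponse.next();
}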
Going back to the middleware.ts example from earlier: the code checks whether the request URL matches the /api/hello pattern. If it does, it checks whether the request method is POST. If it is not, it returns a 400 Bad Request response with an error message that includes the request method.

If the request method is POST, it calls the NextResponse.next() function to tell Next.js to continue processing the request. In the case below, the API is being hit from the browser, hence likely a GET request, so you will get this in your browser:
Now that you have an understanding of middleware, it’s only fair to talk about Edge Functions, which are deeply interwoven with middleware concepts. Let’s have a quick refresher on Edge functions and how they help.
\\nIf you’ve ever used serverless functions, you’ll understand the importance of Edge Functions. To better understand, we’ll compare Edge Functions to serverless functions.
\\nWhen you deploy a serverless function to Vercel, it’s deployed to a server somewhere in the world. The request made to that function will then execute where the server is located.
\\nIf a request is made to a server close to that location, it will happen quickly. But if a request is made to the server from a faraway location, the response will be much slower. This is where Edge Functions can help. Essentially, Edge Functions are serverless functions that run geographically close to a user, making the request very fast regardless of where that user might be.
When deploying a Next.js application to Vercel, the middleware is deployed as Edge Functions to all regions worldwide. This means that instead of a function sitting on one server, it sits on many servers; in other words, your middleware runs as Edge Functions.

One of the unique things about Edge Functions is that they are much smaller than typical serverless functions. They also run on the V8 runtime, which is claimed to make them up to 100x faster than Node.js in containers or virtual machines.
Let's create a new API route (Next.js comes with a default one, hello.js). To do this, we'll add a file directly in the pages/api sub-folder of our project. Let's paste this in:

import { NextResponse } from 'next/server';

export const config = {
  runtime: 'edge', // This specifies the runtime environment the function executes in
};

export default (request) => {
  return NextResponse.json({
    name: `Hello, from ${request.url} I'm now an Edge Function!`,
  });
};
You can then deploy to Vercel or any edge computing platform of your choice.
Understanding how to use middleware with Edge Functions efficiently solves the issue of sharing common logic across multiple applications (commonly seen with authentication, bot protection, redirects, browser support, feature flags, A/B testing, server-side analytics, logging, and geolocation).
With the help of Edge Functions, middleware can run as fast as static web applications, because Edge Functions reduce latency and eliminate cold starts.
\\nWith Edge Functions, we can run our files on multiple geolocations, allowing regions closest to the user to respond to the user’s request. This provides faster user requests regardless of their geographic location.
\\nTraditionally, web content is served from a CDN to an end user to increase speed. However, because these are static pages, we lose dynamic content. Also, we use server-side rendering to get dynamic content from the server, but we lose speed.
\\nHowever, deploying our middleware to the Edge like a CDN brings our server logic closer to our visitors’ origin. As a result, we have speed and personalization for the users. As a developer, you can now build and deploy your website and then cache the result in CDNs across the world.
\\nLet’s see how we can use middleware and Edge Functions. We’ll use the middleware to create a protected endpoint for the Edge Functions. First, open up the terminal and create a folder where we want our project installed:
\\nmkdir next_middleware\\n\\n
Then, cd into the newly created folder and install Next.js using this command:
npx create-next-app@latest\\n\\n
Give the project a name and choose other preferences during the installation process. Make sure you are opting for the app router for the latest Next.js features.
Now that we've done that, let's create our middleware. Create a file called middleware.js at the root of the project, at the same level as the app directory. Next, paste in the following:

// middleware.js
import { NextResponse } from "next/server";

export function middleware(request) {
  // Get the admin cookie id from the request
  let adminCookieId = request.cookies.get("adminCookieId")?.value;

  // If the admin cookie id is not "abcdefg", redirect the user to the admin login page
  if (adminCookieId !== "abcdefg") {
    return NextResponse.redirect(new URL("/admin-login", request.url));
  }
}

export const config = {
  matcher: "/api/protected/:path*", // The URL patterns this middleware applies to
};
In the code above, middleware.js defines a middleware function that protects a Next.js API endpoint. The middleware function checks the request for an adminCookieId. If the adminCookieId is not present or is not equal to "abcdefg", the middleware function redirects the user to the admin login page.
Let’s create our protected endpoint. Inside our pages/api
folder, create a protected
folder. Then, create an index.js
folder inside it and paste this:
// do this only if you are using the pages folder, this has been deprecated in the app router\\n//export const config = {\\n// runtime: \\"edge\\",\\n// };\\nexport default function handler(req, res) {\\n const { language } = req.query;\\n // Personalization logic based on user preferences\\n let greeting;\\n if (language === \\"en\\") {\\n greeting = \\"Hello! Welcome!\\";\\n } else if (language === \\"fr\\") {\\n greeting = \\"Bonjour! Bienvenue!\\";\\n } else if (language === \\"es\\") {\\n greeting = \\"¡Hola! ¡Bienvenido!\\";\\n } else {\\n greeting = \\"Welcome!\\";\\n }\\n res.status(200).json({ greeting });\\n}\\n\\n
In case you're using the app router, you can move the logic to app/api/protected/route.js:

export const runtime = 'edge'; // Run this API route at the edge

export async function GET(request) {
  // Read the query string; app router handlers receive a standard Request
  const { searchParams } = new URL(request.url);
  const language = searchParams.get('language');

  // Personalization logic based on the query parameter
  let greeting;
  switch (language) {
    case 'en':
      greeting = 'Hello! Welcome!';
      break;
    case 'fr':
      greeting = 'Bonjour! Bienvenue!';
      break;
    case 'es':
      greeting = '¡Hola! ¡Bienvenido!';
      break;
    default:
      greeting = 'Welcome!';
  }

  return new Response(JSON.stringify({ greeting }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
  });
}
The code above defines an Edge Function that can be used to personalize greetings for users based on their language preferences. The function gets the language from the request query parameters and then uses it to select the appropriate greeting. The function then returns the greeting as a JSON response.
The runtime export (the config object, if you're on the pages router) specifies that the function will be executed at the edge. This means the function runs close to the user, which can improve performance. Here's our final result:
In this article, we discussed what middleware is, how it works, and its advantages and disadvantages. We also went into depth to understand its implications and shortcomings as well.
\\nWe talked briefly about Edge Functions, their use cases, and how to create a simple edge function ourselves. When talking about middleware, Edge Functions play a key role in deployment and creation of the user experience.
\\nWith its ability to solve basic problems like authentication and geolocation, middleware is an outstanding feature. With the help of Edge Functions, middleware can run much faster, like static web applications.
Chrome has integrated AI-powered features to improve the debugging experience in web development. These features leverage the Google Gemini AI model to assist developers in debugging their code without stepping out of the browser environment.
Gemini offers many useful features, including an AI assistance panel, an intelligent coding companion, console insights that provide AI-generated explanations for console errors and warnings directly in the console, and an "Ask AI" feature that now extends to the Sources, Network, and Performance panels for deep insights into performance metrics and network requests.
\\nThis article aims to explore these features with practical examples, showing how developers can leverage them for tasks like performance analysis, UI debugging, and accessibility improvements.
\\nBefore moving forward with this tutorial, you should have:
\\nThe Gemini AI Assistant Panel supports chatting with the Gemini AI model directly in Chrome DevTools. The context of your chats with Gemini AI is based on the source code of the page visited.
\\nAI assistance is extended to the Element and Styles panels to solve styling and DOM tree challenges, the Network panel to debug network requests, the Sources panel for debugging and exploring the source code, and the Performance panel for debugging performance profiles.
\\nTo access the AI Assistant Panel, you need to download the latest Chrome browser version. Once your browser is installed, ensure the following:
Consider our React demo app, which has two noticeable bugs, even though they don't break the app: the search input scrolls behind the cards, and the search doesn't match lowercase queries.
\\nWe’ll use the AI Assistance Panel to fix these bugs.
\\nFor the card overlap issue, select the search input element and prompt the AI with the following:
\\nPlace input field on top of the cards on scroll\\n\\n
Gemini AI suggests adding z-index: 10
to the input field to fix this issue.
Because Tailwind CSS is used for styling, add z-10
to the input field as follows:
<div className=\\"fixed flex justify-center py-5 w-full\\">\\n <input\\n type=\\"text\\"\\n placeholder=\\"Search for a team\\"\\n value={searchTerm}\\n onChange={(e) => setSearchTerm(e.target.value)}\\n className=\\"p-2 z-10 outline-none bg-transparent border border-[rgba(0, 255, 241, 0.4)] w-full md:w-[600px] text-gray-400 rounded\\"\\n />\\n</div>\\n\\n
If you run the app now, you’ll notice that the bug is still not fixed. The good thing is that the suggestion from Gemini is accurate, but then we need to apply our understanding of CSS to completely resolve the issue.
With our understanding of CSS stacking contexts, the parent div's fixed position makes it sit behind other components. So the z-10 should go on the parent component to make it sit above the others:

<div className="fixed z-10 flex justify-center py-5 w-full">
  <input
    type="text"
    placeholder="Search for a team"
    value={searchTerm}
    onChange={(e) => setSearchTerm(e.target.value)}
    className="p-2 outline-none bg-transparent border border-[rgba(0, 255, 241, 0.4)] w-full md:w-[600px] text-gray-400 rounded"
  />
</div>
Run the app and you’ll notice that the bug is fixed:
\\nFor the search functionality bug, select the search input element and prompt the AI with the following:
\\nThe search functionality implemented in React doesn\'t work as expected. (nothing is displayed when user enters a club name in lowercase )\\n\\n
Gemini AI suggests possible causes and their corresponding solutions. Fortunately, applying the first possible cause and solution fixes the search functionality bug.
\\nHere is the cause of the bug:
const filterByName = (teams, searchTerm) => {
  return teams.filter((team) =>
    team.name.includes(searchTerm)
  );
};
Here is the solution:
const filterByName = (teams, searchTerm) => {
  return teams.filter((team) =>
    team.name.toLowerCase().includes(searchTerm.toLowerCase()));
};
With this solution, our search functionality should work as expected:
\\nThe console insight is an AI-powered tool that deciphers and explains console errors by analyzing stack traces and context from the source. With new Gemini support in DevTools, console insight explains the error and suggests fixes you can apply immediately.
\\nCheck the browser console of the demo React app, and you’ll see the following error:
\\nindex.jsx:72 Warning: Each child in a list should have a unique \\"key\\" prop.\\n\\nCheck the render method of `Cards`. See https://reactjs.org/link/warning-keys for more information.\\n at div\\n at Cards (http://localhost:3000/static/js/bundle.js:317:86)\\n at App\\n[email protected]:72\\n\\n
Hover on the error message, and you’ll see a bulb icon on the right section of the error message. Click the bulb icon, then turn on Console insights in Settings to receive AI assistance for understanding and addressing console warnings and errors:
\\nNow, Gemini AI will explain the error and suggest a fix for the error:
\\nAdd a unique key prop to the card component, and the error will be resolved:
\\n{filteredTeams.map((team) => (\\n <div key={team.name} className=\\"card relative flex justify-center items-center rounded-lg transition-all duration-150 h-[250px] sm:h-[200px] w-[250px] sm:w-[200px]\\">\\n <div className=\\"card-content flex flex-col justify-center items-center gap-[16px] sm:gap-[26px] bg-[#13161c] rounded-lg transition-all duration-250 h-[calc(100%-2px)] w-[calc(100%-2px)]\\">\\n <img\\n className=\\"logo w-[40px] sm:w-[50px]\\"\\n src={`${getImage(team.logo)}`}\\n alt={team.name}\\n />\\n ....\\n </div>\\n </div>\\n))}\\n\\n
The network request debugging AI lets you chat with Gemini AI using context based on the selected network request.
\\nRight-click any network request and click the Ask AI option to access the network request debugging AI:
\\nNow, you can chat with the Gemini AI assistant based on the context of the selected network request. You can also change the chat context by selecting another network request.
\\n\\nTo see the raw data used by Gemini AI to reply to our chat, click the Analyzing network data dropdown:
\\nA great use case for the network request debugging AI is to debug network issues such as timeouts (server takes too long to respond), DNS resolution problems (can’t resolve host), SSL/TLS errors (invalid certificate or HTTPS issues), and rate limiting or throttling (e.g., 429 Too Many Requests).
\\nThe initial load speed of our demo app is slow — let’s find out why. Select the request to tailwindcss.com
and prompt the AI with the following:
Why is this network request taking so long?\\n\\n
Gemini AI suggests possible causes and suggestions for improvement. Applying these suggestions will improve the initial load time of the app.
\\nThe source code AI assistant lets you chat with Gemini AI using context based on the file selected in the Sources panel. With this, you can get an overview of what each file does on your website.
\\nTo access the source code AI assistant:
\\nPrompt the AI assistant to get a response within the context of the selected file.
\\nA flame graph in Chrome DevTools visualizes how much CPU time your functions are using. It helps you quickly identify where your app is spending too much time on synchronous operations.
\\nThe performance AI assistant pinpoints performance issues and provides suggestions to resolve them based on the flame graph and recorded performance profiles.
To access the performance AI assistant:
\\nYou can enter a custom prompt or select the example prompts.
\\nNow prompt the AI with the following:
\\nIdentify performance issues in this call tree\\n\\n
The AI assistant analyzes the call tree for performance issues and provides suggestions to resolve them.
\\nThis can help identify slow components and functions in our applications.
\\nThe AI assistant can also adjust color contrast to meet WCAG AA/AAA accessibility requirements.
\\nNotice the poor color contrast on this page:
\\nLet’s use the AI assistant to fix this. First, inspect the section or element with poor color contrast. Click the color palette icon, then open the contrast ratio menu to view AI-suggested options for improving the contrast:
\\nClick the checkbox in each option to automatically fix the contrast issue.
\\nGoogle’s recent integration of the Gemini AI model is a great tool for debugging. In this guide, we offered hands-on practice with the AI assistant panel, color contrast fixer, source code AI assistant, network request debugging AI, console insights, and AI for flame graph analysis. As these features continue to evolve, they’ll redefine how we build and debug for the web, making development smarter and more intuitive.
\\nKeep in mind that these features are experimental and may change or never be included in future Chrome builds.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\n--env-file
flag\\n --watch
flag\\n AsyncLocalStorage
defaulting to AsyncContextFrame
\\n The Node.js team just released Node.js 24 with significant updates and new features. Over the years, Node.js has become known for its dependence on third-party libraries for most tasks, from TypeScript support to testing to environment variable handling. Node.js 24 ships with native features that improve developer experience and security while reducing dependency overhead.
\\nYou can check out our general overview of the Node.js 24 release here.
\\nBut I’m concerned that many developers will miss some of these features because of how subtle they are. In this tutorial, we’ll explore 10 Node.js features you might not be using yet — but absolutely should be. The good news is that from October 2025 onward, Node.js 24 becomes a Long-Term Support (LTS) release. At that point, most of these features will be stable and ready for production use and long-term projects.
\\nI recommend using Node Version Manager (nvm) to install Node.js 24. We’ll switch between Node.js 24 and previous versions when comparing certain features.
\\nInstall nvm on Mac with the following command:
\\nbrew install nvm\\n\\n
To verify that nvm is installed, run the following command:
\\nnvm --version\\n\\n
To install Node.js 24, run the following:
\\nnvm install 24\\n\\n
Run the following command to use a specific node version:
\\nnvm use <version>\\n\\n// To use Node.js 24\\nnvm use 24\\n\\n
The first feature I want to explore is the Node.js built-in support for TypeScript.
\\nThe latest LTS, Node.js 22, didn’t offer the best experience here. If you tried to run a TypeScript file directly using the node
command like node index.ts
, you’d see a bunch of errors:
That’s because Node.js didn’t natively understand TypeScript syntax and types.
\\nHowever, with Node.js 24, you can enable runtime TypeScript support with the new built-in support for type stripping:
\\nNode.js 24 can execute .ts
files that contain only erasable TypeScript syntax. These are type annotations that don’t need to be converted into actual JavaScript.
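For instance, here’s a minimal sketch of an index.ts that uses only erasable syntax (the interface and the annotations simply vanish at runtime), so node index.ts can run it directly:

// index.ts - only erasable TypeScript syntax:
// the interface and type annotations are stripped, leaving plain JavaScript
interface User {
  name: string;
  age: number;
}

const greet = (user: User): string => `Hello, ${user.name}!`;

console.log(greet({ name: 'Ada', age: 36 }));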
For TypeScript features that require actual JavaScript code generation — enum
declarations, public/private
parameter properties in constructors, namespace
, and const enum
— Node.js 24 will fail with a runtime error when run with the plain node
command like node index.ts
:
This is because enum
isn’t just erased; it must be transformed into JavaScript.
To allow Node.js to handle these cases, add this flag:
\\nnode --experimental-transform-types index.ts\\n\\n
Now Node.js will generate the necessary JavaScript code for enum Role
.
For now, this feature focuses on stripping types and erasable syntax rather than type checking, so that Node.js can execute TypeScript code faster and avoid generating source maps. It does not add full TypeScript support.
\\nIf you need full TypeScript support (e.g., for decorators, JSX, path aliases, or strict typing), you should still use the TypeScript compiler (tsc
).
Standard packages such as the fs
, path
, http
, https
modules now have synchronous, callback, and promise-based forms.
If you prefer promises and async/await
, use the promise-based APIs:
import * as fs from \'node:fs/promises\';\\nconst content = await fs.readFile(\'file.txt\', \'utf-8\');\\nconsole.log(content);\\n\\n
If you’re working with older code or prefer callbacks, use the callback and sync APIs:
\\nimport * as fs from \'node:fs\';\\nfs.readFile(\'file.txt\', \'utf-8\', (err, data) => {\\n if (err) throw err;\\n console.log(data);\\n});\\n\\n
Node.js also supports top-level await
for promise-based operations. You can use the await
keyword directly at the top level of your script in an ES module without wrapping it in an async function.
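Here’s a small sketch of top-level await, assuming a .mjs file (or "type": "module" in package.json):

// read-name.mjs - top-level await without an async wrapper
import { readFile } from 'node:fs/promises';

// await is valid at the top level of an ES module
const raw = await readFile('./package.json', 'utf-8');
console.log(JSON.parse(raw).name);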
--env-file
flag
Another external library that Node.js apps have depended on heavily is dotenv, which loads secrets stored in a .env
file into environment variables. With the Node.js --env-file
flag, you can remove dotenv from your app dependencies.
To see this in practice, create a .env
file with the following:
GITHUB_SECRET=MY_SECRET\\n\\n
Add the following to index.js
file:
console.log(process.env.GITHUB_SECRET);\\n\\n
Then set the --env-file
feature flag to .env
:
node --env-file=.env index.js\\n\\n
\\n
You can also access environment variables using the built-in
node:process
module as follows:
import { env } from \'node:process\';\\nconsole.log(env.GITHUB_SECRET);\\n\\n
Node.js now fully supports ECMAScript modules, JavaScript’s official standard for reusable code. You can use the ES6 import
and export
syntax to import or export your modules.
All you have to do is set the \\"type\\"
field in package.json
with a value \\"module\\"
, or use .mjs
file extension to explicitly run your code as ES modules.
Create login.js
and add the following:
export const Login = (user) => {\\n return `${user} is logged in`;\\n}\\n\\n
Update index.js
with the following:
import {Login} from \\"./login.js\\"\\nconsole.log(Login(\\"John Doe\\"));\\n\\n
Add the following to your package.json
file:
{\\n \\"type\\": \\"module\\",\\n}\\n\\n
Run the script with node index.js
:
Note that a file extension must be provided when using the
import
keyword; otherwise, you’ll see the following error:
You can use import
statements to load either ES modules or CommonJS in ES modules (files with .mjs
extension or \\"type\\": \\"module\\"
in package.json
).
Let’s say you have a CommonJS file:
\\n///login.cjs\\nmodule.exports = {\\n login(user) {\\n return `${user} is logged in`;\\n },\\n};\\n\\n
Now you can import the CommonJS file in your ES module as follows:
\\nimport action from \'./login.cjs\';\\n\\nconsole.log(action.login(\\"John Doe\\"));\\n\\n
Importing an ES module in a CommonJS module is limited to using the dynamic import()
syntax to load ES modules:
// index.cjs\\n(async () => {\\n const action = await import(\'./login.mjs\');\\n console.log(action.Login(\\"John Doe\\")); \\n})();\\n\\n
The import()
syntax returns a promise, which can be used in CommonJS to dynamically load ES modules.
Node.js has greatly improved interoperability between the ES and CommonJS module systems.
\\n\\nGoodbye to Mocha, Jest, and other third-party test runners, and to assertion libraries like Chai. Node.js now ships with a built-in test runner, which means you can run tests with just Node, without relying on third-party libraries.
\\nThe test runner lives in the node:test
module and supports describe
, it
, and test
blocks. You can also write assertions using the node:assert
module and generate test coverage reports.
Let’s see this in practice, testing utility functions.
\\nCreate a file called utils.ts
:
export function isEmailValid(email: string): boolean {\\n const regex = /^[^\\\\s@]+@[^\\\\s@]+\\\\.[^\\\\s@]+$/;\\n return regex.test(email);\\n}\\nexport function truncateText(text: string, maxLength: number): string {\\n if (text.length <= maxLength) return text;\\n return text.slice(0, maxLength) + \'...\';\\n}\\n\\n
The function isEmailValid
checks if an input string is a valid email address using a regular expression. truncateText
shortens a string to a specified maximum length and adds an ellipsis (...
) at the end if the text exceeds that length.
Now, create a test file utils.test.js
with the following:
import { describe, it } from \'node:test\';\\nimport assert from \'node:assert\';\\nimport { isEmailValid, truncateText } from \'./utils.ts\';\\n\\ndescribe(\'isEmailValid function\', () => {\\n it(\'returns true for a valid email\', () => {\\n assert.ok(isEmailValid(\'[email protected]\'));\\n });\\n\\n it(\'returns false for an invalid email\', () => {\\n assert.ok(!isEmailValid(\'invalid-email\'));\\n });\\n\\n it(\'returns false for empty string\', () => {\\n assert.ok(!isEmailValid(\'\'));\\n });\\n});\\n\\ndescribe(\'truncateText function\', () => {\\n it(\'returns original text if shorter than maxLength\', () => {\\n assert.strictEqual(truncateText(\'Hello\', 10), \'Hello\');\\n });\\n\\n it(\'truncates and adds ellipsis if text is longer than maxLength\', () => {\\n assert.strictEqual(truncateText(\'This is a long text\', 7), \'This is...\');\\n });\\n\\n it(\'works with empty string\', () => {\\n assert.strictEqual(truncateText(\'\', 5), \'\');\\n });\\n});\\n\\n
Here is how you use Node.js’s native testing feature to test the utility functions. Like the other popular test libraries, the describe
blocks group related tests for each function, while assert
confirms the validity of the return values.
The isEmailValid
tests confirm that the function correctly returns true
for a valid email format, and false
for invalid inputs like a malformed email or an empty string. The truncateText
tests check that if the input text is shorter than or equal to the specified maximum length, it is returned as-is. Otherwise, it is truncated and appended with an ellipsis (...
).
Now, run the following command to execute the tests:
\\nnode --test\\n\\n
You can also get test coverage by running:
\\nnode --test --experimental-test-coverage\\n\\n
--watch
flag
If you’ve worked with frontend libraries like Vue.js, React, and Angular, you’ll be familiar with hot module reloading, which automatically re-runs your code on every change.
\\nIn previous Node.js versions, we achieved similar functionality using a third-party library called nodemon. But now, using the --watch
flag, you can re-run your code on every code change.
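For example, to restart an app on every save (assuming your entry file is index.js):

node --watch index.js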
With the following command, you can watch and re-run your tests on every code change:
\\nnode --watch --test\\n\\n
With this feature, you don’t need to depend on third-party libraries like nodemon for reloading your app on code changes.
\\nAsyncLocalStorage
defaulting to AsyncContextFrame
In earlier Node.js versions, the AsyncLocalStorage
class, used to maintain context across asynchronous operations (like keeping track of user sessions), was prone to bugs. This included cases where the session data returned undefined
or the wrong user ID due to lost asynchronous context.
Node.js 24 improves AsyncLocalStorage
performance by switching its internal implementation to use AsyncContextFrame
.
Here is a practical application of AsyncLocalStorage
to track user sessions across asynchronous HTTP requests in Node.js:
import http from \'node:http\';\\nimport { AsyncLocalStorage } from \'node:async_hooks\';\\nimport { randomUUID } from \'node:crypto\';\\n\\nconst asyncLocalStorage = new AsyncLocalStorage();\\n\\nfunction logWithRequestId(message: string) {\\n const requestId = asyncLocalStorage.getStore();\\n console.log(`[${requestId}] ${message}`);\\n}\\n\\nconst server = http.createServer((req, res) => {\\n const requestId = randomUUID();\\n\\n asyncLocalStorage.run(requestId, () => {\\n logWithRequestId(\'Request received\');\\n\\n setTimeout(() => {\\n logWithRequestId(\'Processed after timeout\');\\n res.writeHead(200, { \'Content-Type\': \'text/plain\' });\\n res.end(`Hello! Your request ID is ${requestId}\\\\n`);\\n }, 100);\\n });\\n});\\n\\nserver.listen(3000, () => {\\n console.log(\'Server is running on http://localhost:3000\');\\n});\\n\\n
Here, each request is assigned a unique requestId
, and we maintain that ID throughout asynchronous operations like setTimeout
.
Switching AsyncLocalStorage
internal implementation to use AsyncContextFrame
ensures each async call maintains its isolated context. This way, logs are always correct, even in deeply nested or delayed async operations.
Visit http://localhost:3000/
and you should see the following:
Node.js now supports SQLite databases with its built-in node:sqlite
module. This means your Node.js app no longer has to depend on external database libraries like better-sqlite3
or sqlite3
for interacting with a SQLite database.
The node:sqlite
module is lightweight and simplifies deployment by eliminating the need for additional database setup.
This feature is still experimental and can be accessed by running your app with the --experimental-sqlite
flag.
Let’s look at a practical example that demonstrates how to use the built-in node:sqlite
module:
import { DatabaseSync } from \'node:sqlite\';\\nconst database = new DatabaseSync(\':memory:\');\\n\\n// Execute SQL statements.\\ndatabase.exec(`\\n CREATE TABLE data(\\n key INTEGER PRIMARY KEY,\\n value TEXT\\n ) STRICT\\n`);\\n// Create a prepared statement to insert data into the database.\\nconst insert = database.prepare(\'INSERT INTO data (key, value) VALUES (?, ?)\');\\n// Execute the prepared statement with bound values.\\ninsert.run(1, \'hello\');\\ninsert.run(2, \'world\');\\n// Create a prepared statement to read data from the database.\\nconst query = database.prepare(\'SELECT * FROM data ORDER BY key\');\\n// Execute the prepared statement and log the result set.\\nconsole.log(query.all());\\n// Prints: [ { key: 1, value: \'hello\' }, { key: 2, value: \'world\' } ]\\n\\n
In Node.js v22.5.0 and later, you can use the built-in SQLite support to work with databases either stored in a file or in memory. To use a file-backed database, provide a file path, and for an in-memory database, use the special path \':memory:\'
. The database.close()
method safely closes the database connection, throwing an error if the database isn’t open.
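As a minimal sketch of a file-backed database (the app.db filename here is just an example):

import { DatabaseSync } from 'node:sqlite';

// Opens (or creates) a database file on disk instead of using ':memory:'
const database = new DatabaseSync('./app.db');

database.exec('CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT)');

const insert = database.prepare('INSERT INTO notes (body) VALUES (?)');
insert.run('persisted to disk');

console.log(database.prepare('SELECT * FROM notes ORDER BY id').all());

// Close the connection; this throws if the database isn't open
database.close();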
To execute one or more SQL statements without expecting results (e.g., from a SQL file), you can use database.exec(sql)
, which wraps sqlite3_exec()
.
Starting in Node.js v24.0.0, you can also register custom aggregate functions using database.aggregate(name, options)
and check if a transaction is active with the boolean property database.isTransaction
.
Node.js ships with a built-in debugger that allows you to easily debug Node.js applications using the Chrome DevTools debugger.
\\nTo enable this, run your Node.js app with the --inspect
flag like so:
node --inspect index.js\\n\\n
However, this might cause your app to start and finish execution before you can attach the debugger. To fix that, use --inspect-brk
instead:
node --inspect-brk index.js\\n\\n
This tells Node.js to pause execution on the first line, giving you time to connect the debugger.
\\nYou’ll see output like:
\\nOpen Chrome and go to: chrome://inspect
, then click on “Open dedicated DevTools for Node”:
Now you can debug your app, step through your code line by line, set breakpoints, and use the Console tab to evaluate expressions.
\\nAnother area where Node.js apps have depended heavily on external libraries is WebSockets, using ws or Socket.IO for client connections to real-time data feeds or to interact with other WebSocket servers. With the Node.js native new WebSocket
constructor, you can remove ws or Socket.IO from your app dependencies if you only need their WebSocket client functionality.
Let’s look at a practical example that demonstrates how to use the built-in WebSocket client:
\\nconst socket = new WebSocket(\'ws://localhost:8080\');\\nsocket.addEventListener(\'open\', event => {\\n console.log(\'WebSocket connection established!\');\\n // Sends a message to the WebSocket server.\\n socket.send(\'Hello Server!\');\\n});\\n\\nsocket.addEventListener(\'message\', event => {\\n console.log(\'Message from server: \', event.data);\\n});\\n\\nsocket.addEventListener(\'close\', event => {\\n console.log(\'WebSocket connection closed:\', event.code, event.reason);\\n});\\n\\nsocket.addEventListener(\'error\', error => {\\n console.error(\'WebSocket error:\', error);\\n});\\n\\n
If you run a WebSocket server on port 8080
and execute this client code, you’ll see a series of events logged in the console. Initially, a message confirming the WebSocket connection will appear.
Then, the client sends a greeting message (\\"Hello Server!\\"
) to the server. Any response sent back from the server will also be displayed in the console. Also, if the server closes the connection or an error occurs during communication, those events will be logged with details such as the close code, reason, or error message.
This is just a glimpse of what’s possible with Node.js and its growing list of new features.
\\nHowever, features like the built-in TypeScript support should be viewed as partial support rather than a full replacement for tools like ts-node
or build systems such as esbuild
or tsc
.
This is especially true for larger codebases that rely on advanced TypeScript configurations. Similarly, the node:sqlite
module doesn’t replace the need for full-featured, production-grade databases like PostgreSQL or MySQL, as it’s best suited for lightweight applications or prototyping.
In the old days, CSS animations were handled by a handful of magicians and tricksters, often referred to as CSS Wizards. These wizards knew the ways of CSS and how to make elements go up and down, disappear, fly out, glow, jump, and hide. Their tale was spoken in all known realms, and their arcane magic was feared by many – and sought after by even more.
\\nAs their fame grew, so did the number of apprentices – people who also wanted to use those arcane spells that made a page transition so smoothly, or gave pleasing loader screens. But the magic was difficult, it took time to master, and when it went bad, it went bad – apps crashed, pages jumped into each other, users couldn’t figure out what was what, and why it was so. These were the difficult times.
\\nFortunately, as time passed, magicians, apprentices, and new wizards slowly but surely collected their own knowledge, classified it, and put it into large libraries – collections of spells that would bring about powerful wizardry even in the hands of a mere apprentice. These are today referred to as CSS animation libraries.
\\nIn this article, we will go over six of these powerful libraries to help wizards and apprentices from all over the globe get their spells up and running quickly, without getting lost deep in scrolls (a.k.a documentation) and making devastating errors.
\\nOur collection tries to present you with the best, easiest, most configurable, and convenient CSS animation libraries out there.
\\nHere are the CSS animation libraries we will cover in this article:
\\nName | \\nBest Use Case | \\nCompatibility | \\n
---|---|---|
Animista | \\nIdeal for small projects or when developers want to quickly animate elements without adding new npm packages | \\nCompatible with almost any JavaScript framework due to its use of native CSS and keyframes | \\n
Animate CSS | \\nSuitable for small and large animation-heavy projects due to its high customizability and flexible implementation (npm or CDN) | \\nWorks with practically any framework, requiring minor adjustments | \\n
AnimXYZ | \\nIdeal for both large and small projects due to its extensive customization capabilities and straightforward use | \\nOffers direct support for React and Vue, and can be used with any other framework | \\n
Whirl | \\nBest for creating seamless and beautiful loading animations to enhance user experience | \\nCompatible with pretty much any JS framework as it uses vanilla CSS, requiring minor adjustments for framework-specific quirks | \\n
Moving Letters | \\nBest suited for animating text | \\nEasily portable to any JS framework | \\n
LDRS | \\nBest suited for loaders and spinner animations | \\nComes with ReactJS support and can be implemented with pretty much any JavaScript framework | \\n
Animista is a handy and easy-to-use on-demand CSS animations library. The library provides ready-made animations for various parts of the app development workflow. No installation is required.
\\nIn order to implement a particular animation, all you need to do is visit its playground, choose the animation you want to implement, and get the relevant class and keyframes for it. All its content is in native CSS: no external dependencies required.
\\nAvailable animations are grouped in different sections: Basic, Entrances, Exits, Text, Attention, and Background.
\\nEach section contains animations corresponding to its group. For instance, in the Entrances section, you’ll find animations like Bounce-in or Roll-in that’ll animate the element coming into the page, while the Attention section will present animations that create a jello or wobble effect.
\\nLet’s say we want to use a rotate-in animation in the Entrances section. We have to get the code from the respective section on the right (indicated by the red arrow):
\\nFor this example, we’ll be using a React app created with Vite, but you’re free to choose your own setup. First, we need to get the code given by Animista and paste it into our CSS file:
\\n.rotate-in-center {\\n -webkit-animation: rotate-in-center 0.6s cubic-bezier(0.25, 0.46, 0.45, 0.94)\\n both;\\n animation: rotate-in-center 0.6s cubic-bezier(0.25, 0.46, 0.45, 0.94) both;\\n}\\n\\n@-webkit-keyframes rotate-in-center {\\n 0% {\\n -webkit-transform: rotate(-360deg);\\n transform: rotate(-360deg);\\n opacity: 0;\\n }\\n 100% {\\n -webkit-transform: rotate(0);\\n transform: rotate(0);\\n opacity: 1;\\n }\\n}\\n@keyframes rotate-in-center {\\n 0% {\\n -webkit-transform: rotate(-360deg);\\n transform: rotate(-360deg);\\n opacity: 0;\\n }\\n 100% {\\n -webkit-transform: rotate(0);\\n transform: rotate(0);\\n opacity: 1;\\n }\\n}\\n\\n
As you can see, we have only one class and the appropriate keyframes for it. Since we’re dealing with a simple class, if we wanted to create a basic React app that would animate an element on button click, we could do something like the following:
\\nimport { useState } from \\"react\\";\\nimport \\"./App.css\\";\\n\\nfunction App() {\\n const [animate, setAnimate] = useState(false);\\n\\n const handleClick = () => {\\n setAnimate(true);\\n setTimeout(() => setAnimate(false), 1000); // match animation duration\\n };\\n\\n return (\\n <>\\n <h2 className={animate ? \\"rotate-in-center\\" : \\"\\"}>Animate me</h2>\\n <button onClick={handleClick}>Animate</button>\\n </>\\n );\\n}\\n\\nexport default App;\\n\\n
With its variety of animation capabilities, Animista can be used in many different applications. The lack of installation and ability to use it with native CSS opens up the possibility for developers working on smaller projects who want to get things up and running quickly. It also applies for devs with bigger codebases who do not want to add yet another npm package, but still want to immediately animate an element.
\\nSince Animista uses native CSS and keyframes, it can be used with almost any JavaScript framework.
\\nAnimate CSS is a highly customizable CSS animation library. It can be used as an npm package, or accessed directly via CDN. On the main page of the Animate CSS website, we have both the documentation and on the right, the animation classes:
\\nAdding an animation using Animate CSS is as easy as:
animate__animated
command as a base to the element you want to animateanimate__bounce
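Here’s a minimal sketch in plain HTML, assuming the CDN build referenced in the library’s docs:

<!-- Load Animate CSS (animate.css) from the CDN -->
<link
  rel="stylesheet"
  href="https://cdnjs.cloudflare.com/ajax/libs/animate.css/4.1.1/animate.min.css"
/>

<!-- animate__animated is the base class; animate__bounce selects the effect -->
<h1 class="animate__animated animate__bounce">An animated heading</h1>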
Consider that we’re going to use Animate CSS on a ReactJS app built with Vite. After installing the npm package, we need to add import \'animate.css\';
in main.jsx
.
Now we can just start using the animations on our elements. Add the following code and run the app:
\\nimport { useState } from \\"react\\";\\nimport \\"./App.css\\";\\n\\nfunction App() {\\n const [animate, setAnimate] = useState(false);\\n\\n const handleClick = () => {\\n setAnimate(true);\\n setTimeout(() => setAnimate(false), 1000); // match animation duration\\n };\\n\\n return (\\n <>\\n <h2 className={animate ? \\"animate__animated animate__bounce\\" : \\"\\"}>\\n Animate me\\n </h2>\\n <button onClick={handleClick}>Animate</button>\\n </>\\n );\\n}\\n\\nexport default App;\\n\\n
You’ll see that on button click, the desired animation is fired:
Animate CSS can be employed by both small and larger projects. Since it supports both package installation and CDN, it gives freedom to the dev team on how they’d like to implement the library. Animate CSS allows a lot of configuration and customization, which makes it suitable for animation-heavy projects.
\\nAnimate CSS can be used with practically any framework. In the documentation, you’ll see class
is used. But considering that we used a React app in our example, you’ll notice that we changed class
to className
. As long as we’re taking such things into account, Animate CSS should work with any popular framework out there.
AnimXYZ is a highly customizable and composable CSS animation framework. It also has direct support for major JavaScript frameworks like React and Vue. They have a great website showcasing the library’s capabilities and an animated documentation section:
As you’d expect from an animation library, it allows developers to create many different animation sequences. AnimXYZ leverages CSS variables to customize a base @keyframes
animation. These xyz
variables control both the timing and animated properties like opacity
and transform
. Depending on whether an element enters or exits, these values animate accordingly between defined states.
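In plain HTML, the composable utilities read like this; a small sketch assuming AnimXYZ’s CSS is already loaded:

<!-- xyz lists the composed utilities; the xyz-in class triggers the enter animation -->
<div class="xyz-in" xyz="fade up big">
  I fade in, slide up, and scale in from large
</div>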
AnimXYZ has a specified installation process for React and Vue. We’ll use a React project built with Vite and will follow the standard installation for this section to experiment with AnimXYZ in the most generic way.
\\nSo, after installing the core package via npm, we will add import \\"@animxyz/core\\";
to main.jsx
and then paste the following into App.jsx
:
import { useState } from \\"react\\";\\nimport \\"./App.css\\";\\n\\nfunction App() {\\n const [key, setKey] = useState(0);\\n\\n const handleClick = () => {\\n setKey((prev) => prev + 1); // trigger re-render with new key\\n };\\n\\n return (\\n <>\\n <button onClick={handleClick}>Animate</button>\\n <div key={key} className=\\"xyz-in\\" xyz=\\"fade up big\\">\\n I will animate in!\\n </div>\\n </>\\n );\\n}\\n\\nexport default App;\\n\\n
AnimXYZ is suited for both worlds: large and small projects alike.
\\nFor large projects, it has extensive customization that enables developers to create pretty much any animation they can imagine. It’s also suitable for small projects because the library itself is fairly straightforward to use, so there’s not a steep learning curve. You or your team can get started with AnimXYZ really quickly.
\\nAnimXYZ offers support for two popular frameworks directly: React and Vue. But it can be used with pretty much any other framework, since it has a core package that works with vanilla JavaScript. As long as the dev team knows JavaScript and its ecosystem, they can effectively use AnimXYZ as a CSS animation library in their projects.
\\nWhirl is a simple library that offers a collection of loading animations. Every little touch is important for the user experience, and adding a well-thought-out, aesthetically pleasing loading animation to a web app greatly enhances the perception of the app’s quality and the team’s seriousness.
\\nI’ve always enjoyed a nice loading animation, and Whirl has an extensive collection of beautifully designed, fully functional loading animations ready to use.
\\nHere’s what the main page of the library looks like:
You’ll see a collection of loading animations. You can look around and try different animations. The Lucky Dip! button gives a random animation.
\\n\\nNo installation is required; everything is done directly with CSS, and clicking the Grab the CSS on Github! link will provide you with the required code.
\\nSo, let’s say we create a React app with Vite. The only thing we need to do is to copy the code given:
Then you’ll paste it into the App.css file. From here, we can just create a simple component with the className rainbow
, and enjoy our loading animation:
import \\"./App.css\\";\\n\\nfunction App() {\\n return (\\n <>\\n <div className=\\"rainbow\\"></div>\\n </>\\n );\\n}\\n\\nexport default App;\\n\\n
Whirl is a small library that targets one niche: loading animations. It is not intended as a full library for complex animations. It does one thing, and it does it well. It’s best for teams or devs who want a better UX by creating seamless, beautiful loading animations.
\\nSince Whirl uses vanilla CSS, it should be compatible with pretty much any JS framework. One thing to note is that, for instance, in our example, we had to take into account that React doesn’t work with the class
keyword, but requires className
to be used. You’ll need to account for these little quirks in the framework of your choice, and you’ll be good to go.
Moving Letters is a CSS animation library that focuses on text. It uses Anime JS under the hood. In principle, you can apply the animations to any element you wish, but the library works best with text.
\\nNo installation is required. For this section, we will use vanilla JS, so go ahead and set up a Vanilla JS project using Vite.
\\nOn the webpage, we can see and choose what kind of animation we want to use:
Once you click, the animation and the code next to it will be presented:
The only thing we need to do is to use this code: no npm packages required. Notice that we’re importing the Anime JS library.
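To give a feel for the shape of that code, here’s a sketch of the first demo’s pattern, assuming anime.js is loaded from a CDN and the heading uses the site’s .ml1 markup:

<h1 class="ml1">Rising Strong</h1>

<script src="https://cdnjs.cloudflare.com/ajax/libs/animejs/3.2.1/anime.min.js"></script>
<script>
  // Wrap every non-space character in a span so each letter animates on its own
  const wrapper = document.querySelector('.ml1');
  wrapper.innerHTML = wrapper.textContent.replace(
    /\S/g,
    "<span class='letter'>$&</span>"
  );

  anime.timeline({ loop: true }).add({
    targets: '.ml1 .letter',
    scale: [0.3, 1],
    opacity: [0, 1],
    easing: 'easeOutExpo',
    duration: 600,
    delay: (el, i) => 70 * (i + 1), // stagger each letter
  });
</script>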
\\nIf implemented correctly, this will be the end result:
Moving Letters is best suited for animating text. You can use it for different purposes, but that would require some tweaking. It’s better to use the tool for what it’s specifically designed for.
\\nMoving Letters gives its examples in vanilla JS, but it can easily be ported to any JS framework. At its core, it’s just vanilla JS and CSS, so even though other frameworks aren’t natively supported, compatibility shouldn’t be difficult to achieve.
\\nLDRS is a great CSS animations library for loaders and spinners. It can be used by installing an npm package or directly with CDN. It has a nice collection of simple yet elegant spinners and loaders, and offers direct support for HTML/CSS and React.
\\nOn the main page, you can choose the loader you want to use:
When clicked, a separate component opens which shows the chosen loader, its source code, and some customization on the side:
You can change properties like size, speed, and background, and copy the code directly.
\\nAfter creating a React + Vite application, this is the App.jsx we want to have:
\\nimport React from \\"react\\";\\nimport { Hourglass } from \\"ldrs/react\\";\\nimport \\"ldrs/react/Hourglass.css\\";\\nimport \\"./App.css\\";\\n\\nconst App = () => {\\n return (\\n <div>\\n {/* // Default values shown */}\\n <Hourglass size=\\"40\\" bgOpacity=\\"0.1\\" speed=\\"1.75\\" color=\\"blue\\" />\\n </div>\\n );\\n};\\n\\nexport default App;\\n\\n
This will render the following page:
LDRS is best suited for loaders and spinner animations.
\\nLDRS comes with ReactJS support, but it can be implemented with pretty much any JavaScript framework.
\\nOur journey through the modern landscape of CSS animation libraries reveals that the “arcane magic” of web motion is more accessible than ever. We’ve moved beyond the days when complex animations were the exclusive domain of CSS wizards.
\\nAs demonstrated, libraries like Animista, Animate CSS, AnimXYZ, Whirl, Moving Letters, and LDRS offer powerful, pre-built solutions to common animation challenges.
\\nEach library streamlines the process of adding engaging microinteractions, smooth transitions, informative loading states, and eye-catching text effects, ultimately enhancing user experience and perceived performance. While their implementation methods and customization options differ, they all share the goal of empowering developers to bring interfaces to life more efficiently.
\\nThe true power lies in choosing the right tool for the job. Consider the scope of your project, your team’s familiarity with different approaches (CSS vs. JS integration), and the specific types of animations you need.
\\nDon’t hesitate to experiment! The best way to master this modern magic is to try these libraries yourself. Dive into their documentation, play with the examples, and see how easily you can elevate your next web project with dynamic and delightful animations.
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nastro:db
\\n Handling media uploads securely and efficiently is something developers can’t afford to get wrong, especially if their website relies heavily on user-generated content or dynamic media delivery. In Astro, this can be achieved through integration with a headless Digital Asset Manager (DAM) like Cloudinary.
\\nA DAM provides a centralized way to manage media assets like images, videos, and other rich media. With headless DAMs like Cloudinary, you can store, transform, and deliver media through APIs or SDKs, making them a good fit for Astro’s component-based architecture.
\\nIn this article, you’ll learn how to build a secure file upload system in Astro using Cloudinary’s SDKs and native integration. We’ll cover how to handle uploads safely using Astro’s SSR mode while maintaining Astro’s static-first advantages, and display media using Astro components.
\\nWe will build a product showcase portal where users can create products and upload media files (video and image) for each product, view the list of products, and delete media uploads.
\\nThis is what the final application will look like:
\\nHere is the GitHub repo for the final build.
\\nRun the following command in your terminal to scaffold an Astro project:
\\nnpm create astro@latest\\n\\n
Then, choose the basic template.
\\nNext, update the project’s package.json
file with the following dependencies:
{\\n \\"dependencies\\": {\\n \\"@astrojs/db\\": \\"^0.14.11\\",\\n \\"@astrojs/netlify\\": \\"^6.2.6\\",\\n \\"@astrojs/react\\": \\"^4.2.4\\",\\n \\"@tailwindcss/vite\\": \\"^4.1.4\\",\\n \\"@types/react\\": \\"^19.1.2\\",\\n \\"@types/react-dom\\": \\"^19.1.2\\",\\n \\"astro\\": \\"^5.6.2\\",\\n \\"react\\": \\"^19.1.0\\",\\n \\"react-dom\\": \\"^19.1.0\\",\\n \\"tailwindcss\\": \\"^4.1.4\\",\\n \\"swiper\\": \\"^11.2.6\\",\\n \\"uuid\\": \\"^11.1.0\\"\\n }\\n}\\n
- @tailwindcss/vite: For utility-first CSS styling with Tailwind
- @astrojs/db: A local-first ORM/SQL database layer for database interactions, defining schemas, and seeding data
- @astrojs/react: To enable the use of React components within the Astro application
- @astrojs/netlify: Integration for deploying Astro projects on Netlify
- uuid: Used to generate universally unique IDs
astro:db
)During development, Astro uses your database configuration to automatically generate local TypeScript types and autocompletion based on your defined schemas each time the dev server is started. We’ll configure and use Astro DB for the app database. Let’s begin by defining the database tables and their relationships.
\\nCreate a db/config.ts
file at the root of your project, where you will define a schema for the database tables and their relationships. Then add the following:
import { column, defineDb, defineTable } from \\"astro:db\\";\\nconst Product = defineTable({\\n columns: {\\n id: column.text({ primaryKey: true }),\\n description: column.text(),\\n price: column.number(),\\n brand: column.text(),\\n slug: column.text({ unique: true }),\\n stock: column.number(),\\n tags: column.text(),\\n name: column.text(),\\n type: column.text(),\\n },\\n});\\nconst ProductMedia = defineTable({\\n columns: {\\n id: column.text({ primaryKey: true }),\\n productId: column.text({ references: () => Product.columns.id }),\\n media: column.text(),\\n media_type: column.text(),\\n },\\n});\\n\\nexport default defineDb({\\n tables: {\\n Product,\\n ProductMedia,\\n },\\n});\\n\\n
Here, we’ve defined a database schema using Astro DB with two tables: Product
and ProductMedia
. The Product
table stores details about individual products, the ProductMedia
table stores the media type and the media URL or identifier.
The ProductMedia
table is linked to the Product
table via the productId
field, establishing a relationship where each media item (like an image or video) is associated with a specific product.
Next, update astro.config.mjs
as follows:
import { defineConfig } from \'astro/config\';\\nimport db from \'@astrojs/db\';\\n\\nexport default defineConfig({\\n integrations: [db()],\\n});\\n\\n
To seed the database with initial data, create a seed-data.ts
file in the db
folder with the following:
interface SeedVehicle {\\n description: string;\\n media: string[];\\n media_type: string;\\n stock: number;\\n price: number;\\n brand: string;\\n slug: string;\\n name: string;\\n type: VehicleTypes;\\n tags: string[];\\n}\\ntype VehicleTypes = \'COUPE\' | \'SEDAN\' | \'SPORTS CAR\' | \'CONVERTIBLE\' | \'TRUCK\' | \'STATION WAGON\';\\nexport const seedVehicles: SeedVehicle[] = [\\n {\\n description:\\n \'Sleek burgundy luxury car with multi-spoke rims in a minimalist beige and brown indoor setting, exuding elegance and modern design.\',\\n media: [\'sample-video.mp4\', \'sample-video.mp4\'],\\n media_type: \'video\',\\n stock: 7,\\n price: 750,\\n brand: \'Tesla\',\\n slug: \'luxury_burgundy_car\',\\n name: \'Luxury Burgundy Car\',\\n type: \'COUPE\',\\n tags: [\'sleek vehicle\', \'luxury car\', \'modern design\']\\n },\\n {\\n description:\\n \'Sleek black SUV with futuristic design parked in front of a modern building with warm lighting and glass panels.\',\\n media: [\'luxury_suv_1.jpeg\', \'luxury_suv_2.jpeg\'],\\n media_type: \'image\',\\n stock: 3,\\n price: 900,\\n brand: \'Tesla\',\\n slug: \'range_rover_luxury_suv\',\\n name: \'Range Rover Luxury SUV\',\\n type: \'COUPE\',\\n tags: [\'SUV\', \'luxury car\', \'modern design\']\\n },\\n {\\n description:\\n \'Front view of a vibrant orange sports car with sharp LED headlights, bold grille, and dramatic lighting in a dark setting.\',\\n media: [\'nissan_sport_1.jpeg\', \'nissan_sport_2.jpeg\'],\\n media_type: \'image\',\\n stock: 6,\\n price: 1200,\\n brand: \'Nissan\',\\n slug: \'nissan_sport_car\',\\n name: \'Nissan Sport Car\',\\n type: \'SPORTS CAR\',\\n tags: [\'aerodynamics\', \'sports\', \'speed\']\\n },\\n]\\n\\n
This code defines a SeedVehicle
TypeScript interface of a single vehicle object and a list of vehicle objects used to seed data into an application. The VehicleTypes
union type defines a limited set of allowed vehicle types.
Next, create a seed.ts
file in the db
folder with the following:
import { db, Product, ProductMedia } from \\"astro:db\\";\\nimport { v4 as UUID } from \\"uuid\\";\\nimport { seedVehicles } from \\"./seed-data\\";\\n\\nexport default async function seed() {\\n  const queries: any[] = [];\\n  seedVehicles.forEach((p) => {\\n    const product = {\\n      id: UUID(),\\n      description: p.description,\\n      price: p.price,\\n      brand: p.brand,\\n      slug: p.slug,\\n      stock: p.stock,\\n      tags: p.tags.join(\\",\\"),\\n      name: p.name,\\n      type: p.type,\\n    };\\n    queries.push(db.insert(Product).values(product));\\n    p.media.forEach((content) => {\\n      const media = {\\n        id: UUID(),\\n        media: content,\\n        productId: product.id,\\n        media_type: p.media_type\\n      };\\n      queries.push(db.insert(ProductMedia).values(media));\\n    });\\n  });\\n  // Await the batched inserts so seeding completes before the server starts\\n  await db.batch(queries);\\n}\\n\\n
This populates the database with the seed data once the dev server starts. It iterates through seedVehicles
from db/seed-data.ts
to create Product
and associated ProductMedia
and uses db.batch()
for efficient insertion of multiple product and media records.
To use React and Tailwind in the Astro project, add the following to astro.config.mjs
:
import { defineConfig } from \'astro/config\';\\nimport react from \\"@astrojs/react\\";\\nimport tailwindcss from \\"@tailwindcss/vite\\";\\nexport default defineConfig({\\n integrations: [react()],\\n output: \\"server\\", \\n vite: {\\n plugins: [tailwindcss()]\\n }\\n});\\n\\n
Next, update the tsconfig.json
file as follows:
{\\n \\"extends\\": \\"astro/tsconfigs/strict\\",\\n \\"compilerOptions\\": {\\n \\"baseUrl\\": \\".\\",\\n \\"paths\\": {\\n \\"@/*\\": [\\n \\"src/*\\"\\n ]\\n },\\n \\"jsx\\": \\"react-jsx\\",\\n \\"jsxImportSource\\": \\"react\\"\\n }\\n}\\n\\n
This config enables strict TypeScript settings with React JSX support and a cleaner import alias for the src
directory.
Next, create styles/global.css
in the assets
folder and add the following:
@import \\"tailwindcss\\";\\n\\n
To enable SSR in the Astro project, add the following to astro.config.mjs
:
import { defineConfig } from \'astro/config\';\\nimport netlify from \\"@astrojs/netlify\\";\\nexport default defineConfig({\\n output: \\"server\\", \\n adapter: netlify(),\\n});\\n\\n
The netlify
adapter allows the server to render any page on demand when a route is visited.
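With output: "server", every route is rendered on demand by default; a page can still opt back into static generation by exporting a prerender flag. A minimal sketch, using a hypothetical about page:

---
// src/pages/about.astro: statically prerendered even though the site runs in SSR mode
export const prerender = true;
---
<h1>About AutoRentals</h1>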
Astro supports creating components with Svelte, Vue, React, SolidJS, and Preact. It’s also framework agnostic, meaning developers can choose and combine different frameworks and libraries for their projects. For this tutorial, we’ll combine React and Astro to create components.
\\nCreate shared/Navbar.astro
in the components
folder and add the following:
---\\n---\\n<!-- component --\x3e\\n<nav\\n class=\\"flex justify-between px-20 py-10 items-center fixed top-0 w-full z-10 h-20\\"\\n style=\\"background-color: #000000;\\"\\n>\\n <h1 class=\\"text-xl text-white font-bold\\">\\n <a href=\\"/\\">AutoRentals</a>\\n </h1>\\n <div class=\\"flex items-center\\">\\n <ul class=\\"flex items-center space-x-6\\">\\n <li class=\\"font-semibold text-white\\">\\n <a href=\\"/dashboard\\">Dashboard</a>\\n </li>\\n </ul>\\n </div>\\n</nav>\\n\\n
Create ProductSlideShow.astro
in the components
folder and add the following:
---\\nimport \\"swiper/css\\";\\nimport \\"swiper/css/pagination\\";\\ninterface Props {\\n media: string[];\\n media_type: string;\\n product_name: string;\\n}\\nconst { media, media_type, product_name } = Astro.props;\\nconst fullMedia = media.map((mediaURL) => {\\n return mediaURL.startsWith(\\"http\\")\\n ? mediaURL\\n : `${import.meta.env.PUBLIC_URL}/media/vehicles/${mediaURL}`;\\n});\\n---\\n<div class=\\"swiper mt-10 col-span-1 sm:col-span-2\\">\\n <!-- Additional required wrapper --\x3e\\n <div class=\\"swiper-wrapper\\">\\n <!-- Slides --\x3e\\n {\\n fullMedia.map((mediaURL) => (\\n <div class=\\"swiper-slide\\">\\n {media_type === \\"video\\" ? (\\n <video class=\\"w-full h-full object-cover px-10\\" autoplay loop muted>\\n <source src={mediaURL} type=\\"video/mp4\\" />\\n Your browser does not support the video tag.\\n </video>\\n ) : (\\n <img\\n src={mediaURL}\\n alt={product_name}\\n class=\\"w-full h-full object-cover px-10\\"\\n />\\n )}\\n </div>\\n ))\\n }\\n </div>\\n <div class=\\"swiper-pagination\\"></div>\\n</div>\\n<style>\\n .swiper {\\n width: 100%;\\n height: 600px;\\n }\\n</style>\\n<script>\\n import Swiper from \\"swiper\\";\\n import { Pagination } from \\"swiper/modules\\";\\n document.addEventListener(\\"astro:page-load\\", () => {\\n const swiper = new Swiper(\\".swiper\\", {\\n pagination: {\\n el: \\".swiper-pagination\\",\\n },\\n modules: [Pagination],\\n });\\n });\\n</script>\\n\\n
This component displays a product media slideshow using Swiper.js. If a URL doesn’t start with \\"http\\"
, it prepends a local path using import.meta.env.PUBLIC_URL
. Depending on the media type, each slide displays either an <img>
or <video>
element.
For more tutorials using Swiper.js, check out the following guides:
\\n\\nLayouts are Astro components that provide a reusable UI structure for sharing UI elements like navigation bars, menus, and footers across multiple pages.
\\nCreate MainLayout.astro
in the layouts
folder and add the following:
---\\nimport Navbar from \\"@/components/shared/Navbar.astro\\";\\nimport \\"@/assets/styles/global.css\\";\\nimport { ClientRouter } from \\"astro:transitions\\";\\ninterface Props {\\n title?: string;\\n description?: string;\\n image?: string;\\n}\\nconst {\\n title = \\"AutoRentals\\",\\n description = \\"One stop shop for all your vehicle rentals\\",\\n image = \\"/vehicles/images/no-image.png\\",\\n} = Astro.props;\\n---\\n<html lang=\\"es\\">\\n <head>\\n <meta charset=\\"utf-8\\" />\\n <link rel=\\"icon\\" type=\\"image/svg+xml\\" href=\\"/favicon.svg\\" />\\n <meta name=\\"viewport\\" content=\\"width=device-width\\" />\\n <meta name=\\"generator\\" content={Astro.generator} />\\n <title>{title}</title>\\n <!-- Meta tags --\x3e\\n <meta name=\\"title\\" content={title} />\\n <meta name=\\"description\\" content={description} />\\n <!-- Open Graph / Facebook --\x3e\\n <meta property=\\"og:title\\" content={title} />\\n <meta property=\\"og:url\\" content={Astro.url} />\\n <meta property=\\"og:description\\" content={description} />\\n <meta property=\\"og:type\\" content=\\"website\\" />\\n <meta property=\\"og:image\\" content={image} />\\n <!-- Twitter --\x3e\\n <meta property=\\"twitter:card\\" content=\\"summary_large_image\\" />\\n <meta property=\\"twitter:url\\" content={Astro.url} />\\n <meta property=\\"twitter:title\\" content={title} />\\n <meta property=\\"twitter:description\\" content={description} />\\n <meta property=\\"twitter:image\\" content={image} />\\n <ClientRouter />\\n </head>\\n <body>\\n <Navbar />\\n <main class=\\"container m-auto max-w-5xl px-5 pt-24 pb-10\\">\\n <slot />\\n </main>\\n </body>\\n</html>\\n\\n
The MainLayout
component supports optional title
, description
, and image
props to dynamically set SEO and social media meta tags, improving the site’s visibility. The <ClientRouter />
from astro:transitions
is included to enable smooth, client-side page transitions.
To display prices with corresponding currencies, we need a currency formatting utility.
\\nCreate a utils/formatter.ts
file in the src
folder and add the following:
export class Formatter {\\n static currency(value: number, decimals = 2): string {\\n return new Intl.NumberFormat(\\"en-US\\", {\\n style: \\"currency\\",\\n currency: \\"USD\\",\\n maximumFractionDigits: decimals,\\n }).format(value);\\n }\\n}\\n\\n
The Formatter
class formats a number into a U.S. dollar currency string using the built-in Intl.NumberFormat
API.
To format the media URL for an absolute URL and a relative path URL, add the following to formatter.ts
file:
export class Formatter {\\n ...\\n static formatMedia (mediaURL: string): string {\\n return mediaURL.startsWith(\\"http\\")\\n ? mediaURL\\n : `${import.meta.env.PUBLIC_URL}/media/vehicles/${mediaURL}`;\\n };\\n}\\n\\n
The formatMedia
method checks if the given mediaURL
is already an absolute URL (i.e., starts with “http”). If so, it returns the URL as-is. Otherwise, it assumes the media is stored locally and prepends a base path, constructed from an environment variable (PUBLIC_URL
), followed by the relative path to the media directory (/media/vehicles/
).
Create a interface/product-with-media.interface.ts
file in the src
folder and add the following:
export interface ProductWithMedia {\\n id: string;\\n description: string;\\n media: string;\\n media_type: string;\\n price: number;\\n brand: string;\\n slug: string;\\n stock: number;\\n tags: string;\\n name: string;\\n type: string;\\n}\\n\\n
The product list view will display all available products with their associated media (video/image).
\\nWe need to create a server action that fetches all available products with their associated media from the database. Create a products/get-products.action.ts
file in the actions
folder and add the following:
import type { ProductWithMedia } from \\"@/interfaces\\";\\nimport { defineAction } from \\"astro:actions\\";\\nimport { db, sql } from \\"astro:db\\";\\nexport const getProducts = defineAction({\\n accept: \\"json\\",\\n handler: async () => {\\n const productsQuery = sql`\\n SELECT a.*,\\n (\\n SELECT GROUP_CONCAT(media)\\n FROM ProductMedia\\n WHERE productId = a.id\\n ) AS media,\\n (\\n SELECT media_type\\n FROM ProductMedia\\n WHERE productId = a.id\\n ) AS media_type\\n FROM Product a;\\n`;\\n const { rows } = await db.run(productsQuery);\\n const products = rows.map((product) => {\\n return {\\n ...product,\\n media: product.media ? product.media : \\"no-image.png\\",\\n media_type: product.media_type \\n };\\n }) as unknown as ProductWithMedia[];\\n return {\\n products: products,\\n };\\n },\\n});\\n\\n
The getProducts
server-side action fetches a list of products from the database, including associated media data. It uses Astro’s defineAction
utility to create an endpoint that accepts JSON and runs an SQL query.
Create an index.ts
file in the actions
folder and add the following:
import {\\n getProducts,\\n} from \\"./products\\";\\nexport const server = {\\n getProducts,\\n};\\n\\n
Next, create dashboard/products/index.astro
in the pages
folder and add the following:
---\\nimport { actions } from \\"astro:actions\\";\\nimport MainLayout from \\"@/layouts/MainLayout.astro\\";\\nimport { Formatter } from \\"@/utils\\";\\n\\nconst { data, error } = await Astro.callAction(actions.getProducts, {});\\nif (error) {\\n return Astro.redirect(\\"/\\");\\n}\\nconst { products } = data;\\n---\\n<MainLayout title=\\"Admin Dashboard\\" description=\\"Admin Dashboard\\">\\n <h1 class=\\"font-bold text-2xl\\">Dashboard</h1>\\n <div class=\\"flex justify-between items-center mt-4\\">\\n <p class=\\"font-semibold text-lg\\">Product List</p>\\n <a class=\\"bg-black text-white font-bold py-2 px-4 rounded transition-all\\"\\n href=\\"/dashboard/products/new\\">Add Product</a>\\n </div>\\n <table class=\\"w-full mt-5\\">\\n <thead>\\n <tr>\\n <th class=\\"text-left\\">Media</th>\\n <th class=\\"text-left\\">Title</th>\\n <th class=\\"text-left\\">Daily Charges</th>\\n <th class=\\"text-left\\">Inventory</th>\\n </tr>\\n </thead>\\n <tbody>\\n {\\n products.map((product) => (\\n <tr>\\n <td>\\n {\\n product.media.length > 0 ? (\\n product.media_type === \\"video\\" ? (\\n <video\\n src={Formatter.formatMedia(product.media.split(\',\')[0])}\\n class=\\"w-16 h-16 mb-2\\"\\n autoplay\\n loop\\n muted\\n />\\n ) : (\\n <img\\n src={Formatter.formatMedia(product.media.split(\',\')[0])}\\n alt={product.name}\\n class=\\"w-16 h-16 mb-2\\"\\n />\\n )) : (\\n <img src=`/media/products/no-image.png` alt=\\"No image\\">\\n )\\n }\\n </td>\\n <td>\\n <a\\n class=\\"hover:underline cursor-pointer\\"\\n href={`/dashboard/products/${product.slug}`}\\n >\\n {product.name}\\n </a>\\n </td>\\n <td>{Formatter.currency(product.price)}</td>\\n <td class=\\"justify-end\\">{product.stock}</td>\\n </tr>\\n ))\\n }\\n </tbody>\\n </table>\\n</MainLayout>\\n\\n
This page calls the getProducts
server action via Astro.callAction()
to fetch product data. If successful, the list of products is rendered in a table format.
You should see the following when visiting http://localhost:4321/dashboard
:
To create a dynamic route for the product creation and update, create the following file:
\\n/pages/products/[...slug].astro\\n\\n
[...slug]
is a dynamic segment that Astro uses to render different content based on the URL.
Add the following to the dashboard/products/[...slug].astro
file:
---\\nimport ProductSlideShow from \\"@/components/products/ProductSlideShow.astro\\";\\nimport MainLayout from \\"@/layouts/MainLayout.astro\\";\\nimport { actions } from \\"astro:actions\\";\\nimport { Formatter } from \\"@/utils\\";\\nconst { slug } = Astro.params;\\nconst { data, error } = await Astro.callAction(\\n actions.getProductBySlug,\\n slug ?? \\"\\"\\n);\\nif (error) {\\n return Astro.redirect(\\"/404\\");\\n}\\nconst { product, media } = data;\\n---\\n<MainLayout title=\\"Product update page\\">\\n <form>\\n <input type=\\"hidden\\" name=\\"id\\" value={product.id} />\\n <div class=\\"flex justify-between items-center\\">\\n <h1 class=\\"font-bold text-2xl\\">{product.name}</h1>\\n <button\\n type=\\"submit\\"\\n class=\\"bg-black mb-5 p-2 rounded text-white cursor-pointer\\"\\n >Save Changes</button\\n >\\n </div>\\n <div class=\\"grid grid-cols-1 sm:grid-cols-2 gap-4\\">\\n <!-- File upload --\x3e\\n <div>\\n <div class=\\"mb-4\\">\\n <label for=\\"name\\" class=\\"block\\">Name</label>\\n <input\\n type=\\"text\\"\\n id=\\"name\\"\\n name=\\"name\\"\\n value={product.name}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"slug\\" class=\\"block\\">Slug</label>\\n <input\\n type=\\"text\\"\\n id=\\"slug\\"\\n name=\\"slug\\"\\n value={product.slug}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"description\\" class=\\"block\\">Description</label>\\n <textarea\\n id=\\"description\\"\\n name=\\"description\\"\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n rows=\\"8\\">{product.description}</textarea\\n >\\n </div>\\n <div class=\\"grid grid-cols-1 sm:grid-cols-2 gap-5\\">\\n <div class=\\"mb-4\\">\\n <label for=\\"price\\" class=\\"block\\">Daily Charges</label>\\n <input\\n type=\\"number\\"\\n id=\\"price\\"\\n name=\\"price\\"\\n value={product.price}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"stock\\" class=\\"block\\">Inventory</label>\\n <input\\n type=\\"number\\"\\n id=\\"stock\\"\\n name=\\"stock\\"\\n value={product.stock}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"brand\\" class=\\"block\\">Brand</label>\\n <input\\n type=\\"text\\"\\n id=\\"brand\\"\\n name=\\"brand\\"\\n value={product.brand}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"tags\\" class=\\"block\\"\\n >Tags <small class=\\"text-gray-500\\">(Separate with comas)</small\\n ></label\\n >\\n <input\\n type=\\"text\\"\\n id=\\"tags\\"\\n name=\\"tags\\"\\n value={product.tags}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"grid grid-cols-2 gap-4\\">\\n <div class=\\"mb-4\\">\\n <label for=\\"tags\\" class=\\"block\\">Type</label>\\n <select\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n name=\\"type\\"\\n >\\n <option value=\\"\\">[ Select ]</option>\\n {\\n [\\n \\"COUPE\\",\\n \\"SEDAN\\",\\n \\"SPORTS CAR\\",\\n \\"CONVERTIBLE\\",\\n \\"TRUCK\\",\\n \\"STATION WAGON\\",\\n ].map((type) => (\\n <option\\n value={type}\\n class=\\"capitalize\\"\\n selected={type === product.type}\\n >\\n {type.toUpperCase()}\\n </option>\\n ))\\n }\\n </select>\\n </div>\\n </div>\\n </div>\\n </div>\\n </form>\\n</MainLayout>\\n<script>\\n import { actions } from \\"astro:actions\\";\\n import 
{ navigate } from \\"astro:transitions/client\\";\\n document.addEventListener(\\"astro:page-load\\", () => {\\n const form = document.querySelector(\\"form\\") as HTMLFormElement;\\n if (!form) {\\n return;\\n }\\n form.addEventListener(\\"submit\\", async (e) => {\\n e.preventDefault();\\n const formData = new FormData(form);\\n const { data, error } = await actions.createUpdateProduct(formData);\\n if (error) {\\n return alert(error.message);\\n }\\n navigate(`/dashboard/products/${data.slug}`);\\n });\\n });\\n</script>\\n\\n
This Astro component renders a product update page where users can edit and save changes to an existing product. It uses the product’s slug
from the route parameters to fetch the current product data and media via the getProductBySlug
action. If the product is not found, it redirects to a 404 page.
We’ll create the getProductBySlug
server action to fetch a single product by its slug from the database.
Create get-product-by-slug.action.ts
in the actions/products
folder and add the following:
import { defineAction} from \\"astro:actions\\";\\nimport { z } from \\"astro:schema\\";\\nimport { Product, ProductMedia, db, eq } from \\"astro:db\\";\\nconst newProduct = {\\n id: \\"\\",\\n description: \\"New product description\\",\\n brand: \\"New Brand\\",\\n media: \\"no-image.png\\",\\n media_type: \\"image\\",\\n name: \\"Sample product\\",\\n price: 100,\\n slug: \\"sample-product\\",\\n stock: 5,\\n tags: \\"car,speed,modern\\",\\n type: \\"Truck\\",\\n};\\nexport const getProductBySlug = defineAction({\\n accept: \\"json\\",\\n input: z.string(),\\n handler: async (slug) => {\\n if (slug === \\"new\\") {\\n return {\\n product: newProduct,\\n media: [],\\n };\\n }\\n const [product] = await db\\n .select()\\n .from(Product)\\n .where(eq(Product.slug, slug));\\n if (!product) throw new Error(`Product with slug ${slug} not found.`);\\n const media = await db\\n .select()\\n .from(ProductMedia)\\n .where(eq(ProductMedia.productId, product.id));\\n return {\\n product: product,\\n media: media,\\n };\\n },\\n});\\n\\n
The getProductBySlug
action retrieves product data by its slug
and is designed to support both fetching existing products and preparing a template for creating new ones. It checks if the provided slug is \\"new\\"
; if it is, it returns a default product object (newProduct
) and an empty media array. Otherwise, it queries the database for a product matching the given slug and, if found, also fetches its associated media files. If the product doesn’t exist, it throws an error.
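For the action to be callable as actions.getProductBySlug, it also needs to be registered in the project’s actions entry point. Here is a minimal sketch, assuming the conventional src/actions/index.ts layout (adjust the import path to match your project):

// src/actions/index.ts -- Astro looks for a `server` export here
// and registers every action it contains
import { getProductBySlug } from "./products/get-product-by-slug.action";

export const server = {
  getProductBySlug,
};

With that in place, both Astro.callAction on the server and the actions client import shown earlier resolve to the same handler.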
Before we dive into the implementation of the secure file upload in Astro, let’s review the project’s file upload flow. At a high level, a form component uploads files from the client to the server.
\\nAs users fill out and submit the form, the server receives the data, stores the product in the Product table, then uploads the media files to Cloudinary, which returns a secure URL for each uploaded media file to the server. The server proceeds to store the secure media URLs along with their associated product IDs in the ProductMedia table.
\\nWhen the user visits the /products
route, the server responds with the products along with media URLs.
This section will cover how to implement a secure file upload system in Astro using Cloudinary’s SDKs and native integration.
\\nTo configure Cloudinary in your app, you need the following credentials in your .env
file:
CLOUDINARY_CLOUD_NAME=\\nCLOUDINARY_API_KEY=\\nCLOUDINARY_API_SECRET=\\n\\n
Sign in to your Cloudinary account, then click on Go to API Keys to access the above credentials.
\\n\\n
Next, create media-upload.ts
in the utils
folder and add the following:
import { v2 as cloudinary } from \\"cloudinary\\";\\n\\ncloudinary.config({\\n cloud_name: import.meta.env.CLOUDINARY_CLOUD_NAME,\\n api_key: import.meta.env.CLOUDINARY_API_KEY,\\n api_secret: import.meta.env.CLOUDINARY_API_SECRET,\\n});\\n\\nexport class MediaUpload {\\n static async upload(file: File) {\\n try {\\n const buffer = await file.arrayBuffer();\\n const base64Data = Buffer.from(buffer).toString(\\"base64\\");\\n const [fileType, format] = file.type.split(\\"/\\"); \\n const resourceType = fileType as \\"image\\" | \\"video\\" | \\"raw\\" | \\"auto\\";\\n const supportedTypes = [\\"image\\", \\"video\\"] as const;\\n\\n if (!supportedTypes.includes(fileType as typeof supportedTypes[number])) {\\n throw new Error(`Unsupported file type: ${file.type}`);\\n }\\n const dataUri = `data:${file.type};base64,${base64Data}`;\\n const resp = await cloudinary.uploader.upload(dataUri, {\\n resource_type: resourceType,\\n });\\n return {secure_url:resp.secure_url, fileType};\\n } catch (error) {\\n throw new Error(JSON.stringify(error));\\n }\\n }\\n}\\n\\n
Here, we’ve initialized Cloudinary with credentials stored in environment variables and defined a MediaUpload
class for uploading media files to Cloudinary. The upload
method accepts a File
object, reads its contents as a buffer, and converts it to a Base64-encoded data URI. It determines the file’s type and ensures it’s either an image or a video, rejecting unsupported types. Then, it uploads the file to Cloudinary using the appropriate resource_type
and returns the secure URL and file type upon success.
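To illustrate the contract this utility exposes, here is a hypothetical call site; the formData variable is assumed to be a parsed multipart form:

// Hypothetical usage sketch of MediaUpload.upload
const file = formData.get("mediaFiles") as File;

const { secure_url, fileType } = await MediaUpload.upload(file);
console.log(`Uploaded ${fileType} to ${secure_url}`);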
With the MediaUpload
util implemented, we’ll create a server action to handle file uploads. Create create-update-product.action.ts
in the actions/products folder and add the following:
import { defineAction } from \\"astro:actions\\";\\nimport { z } from \\"astro:schema\\";\\nimport { Product, db, eq, ProductMedia } from \\"astro:db\\";\\nimport { v4 as UUID } from \\"uuid\\";\\nimport { MediaUpload } from \\"@/utils\\";\\nconst MAX_FILE_SIZE = 50_000_000;\\nconst ACCEPTED_MEDIA_FILE = [\\n \\"image/jpeg\\",\\n \\"image/jpg\\",\\n \\"image/png\\",\\n \\"image/webp\\",\\n \\"image/svg+xml\\",\\n \\"video/mp4\\"\\n];\\n\\nexport const createUpdateProduct = defineAction({\\n accept: \\"form\\",\\n input: z.object({\\n id: z.string().optional(),\\n description: z.string(),\\n price: z.number(),\\n brand: z.string(),\\n slug: z.string(),\\n stock: z.number(),\\n tags: z.string(),\\n name: z.string(),\\n type: z.string(),\\n mediaFiles: z\\n .array(\\n z\\n .instanceof(File)\\n .refine((file) => file.size <= MAX_FILE_SIZE, \\"Max file size 50Mb\\")\\n .refine((file) => {\\n if (file.size === 0) return true;\\n return ACCEPTED_MEDIA_FILE.includes(file.type);\\n }, `Only supported media files are valid ${ACCEPTED_MEDIA_FILE.join(\\", \\")}`)\\n )\\n .optional(),\\n }),\\n handler: async (form) => {}\\n});\\n\\n
The createUpdateProduct
action handles product creation and updates, including media file uploads. It uses defineAction
to specify that the input will come from a form and validates it using Zod (z
). Each media file is validated to ensure it’s no larger than 50MB and matches one of the accepted MIME types (images and MP4 videos).
Next, update the handler method with the following:
export const createUpdateProduct = defineAction({
  // ...accept and input remain as defined earlier
  handler: async (form) => {
    type MediaContentObj = {
      secure_url: string;
      fileType: string;
    };
    const secureUrls: MediaContentObj[] = [];
    const { id = UUID(), mediaFiles, ...rest } = form;
    rest.slug = rest.slug.toLowerCase().replaceAll(" ", "-").trim();
    const product = {
      id: id,
      ...rest,
    };
    const queries: any[] = [];

    if (!form.id) {
      queries.push(db.insert(Product).values(product));
    } else {
      // Push the query builder itself (no await) so it executes
      // inside the batch below rather than immediately
      queries.push(db.update(Product).set(product).where(eq(Product.id, id)));
    }

    if (
      form.mediaFiles &&
      form.mediaFiles.length > 0 &&
      form.mediaFiles[0].size > 0
    ) {
      const urls = await Promise.all(
        form.mediaFiles.map((file) => MediaUpload.upload(file))
      );
      secureUrls.push(...urls);
    }
    secureUrls.forEach((media) => {
      const mediaObj = {
        id: UUID(),
        media: media.secure_url,
        productId: product.id,
        media_type: media.fileType,
      };
      queries.push(db.insert(ProductMedia).values(mediaObj));
    });
    await db.batch(queries);
    return product;
  },
});
The handler function inserts a new product if an id
is not provided. Otherwise, it updates an existing product if an id
is provided. If media files are included, it uploads them using the MediaUpload.upload
utility, collects their secure URLs and file types, and adds them to the database as entries in the ProductMedia
table. Finally, all the queries are executed as a single batch using db.batch()
for efficient insertion/update of product and media records.
Navigate to products/[...slug].astro
and add the following, where you have <!-- File upload --\x3e
:
<div class=\\"mt-4\\">\\n <!-- File input --\x3e\\n <div class=\\"flex items-center justify-center w-full\\">\\n <label\\n for=\\"file-upload\\"\\n class=\\"flex flex-col items-center justify-center w-full h-52 border-2 border-dashed border-gray-300 rounded-lg cursor-pointer hover:bg-gray-100\\"\\n id=\\"drop-zone\\"\\n >\\n <div class=\\"flex flex-col items-center justify-center pt-5 pb-6\\">\\n <svg\\n class=\\"w-8 h-8 mb-4 text-gray-500\\"\\n fill=\\"none\\"\\n stroke=\\"currentColor\\"\\n viewBox=\\"0 0 24 24\\"\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n >\\n <path\\n stroke-linecap=\\"round\\"\\n stroke-linejoin=\\"round\\"\\n stroke-width=\\"2\\"\\n d=\\"M7 16V4a2 2 0 012-2h6a2 2 0 012 2v12m-6 4l-4-4m0 0l4-4m-4 4h12\\"\\n ></path>\\n </svg>\\n <p class=\\"mb-2 text-sm text-gray-500\\" id=\\"lbl-selected-files\\">\\n <span class=\\"font-semibold\\">Click here to upload </span> or drag/drop files\\n </p>\\n <p class=\\"text-xs text-gray-500\\">\\n MP4, SVG, PNG, JPG or GIF (max. 800x400px)\\n </p>\\n </div>\\n <input\\n id=\\"file-upload\\"\\n name=\\"mediaFiles\\"\\n type=\\"file\\"\\n multiple\\n class=\\"hidden\\"\\n />\\n </label>\\n </div>\\n <!-- Slideshow --\x3e\\n <ProductSlideShow\\n media_type={media.map((i) => i.media_type)[0]}\\n product_name={product.name}\\n media={media.map((i) => i.media)}\\n />\\n <table class=\\"w-full border mt-10\\">\\n <thead>\\n <tr>\\n <th>Media</th>\\n <th>Delete</th>\\n </tr>\\n </thead>\\n <tbody>\\n {\\n media.map(({ media, media_type, id }) => (\\n <tr class=\\"border\\" id={id}>\\n <td class=\\"flex py-2 justify-center\\">\\n {media_type === \\"video\\" ? (\\n <video\\n src={Formatter.formatMedia(media)}\\n class=\\"w-16 h-16 rounded\\"\\n autoplay\\n loop\\n muted\\n />\\n ) : (\\n <img\\n src={Formatter.formatMedia(media)}\\n alt={product.name}\\n class=\\"w-16 h-16 rounded\\"\\n />\\n )}\\n </td>\\n <td class=\\"text-center\\">\\n <button type=\\"button\\" data-id={id} class=\\"btn-delete-media rounded border cursor-pointer border-black w-10 h-10 mr-4 hover:bg-black hover:text-white transition-all\\">\\n X\\n </button>\\n </td>\\n </tr>\\n ))\\n }\\n </tbody>\\n </table>\\n</div>\\n\\n
This creates a user interface for uploading and deleting media files (images or videos) for a product. It features a drag-and-drop file upload area and allows users to either click to select files or drop them directly into the zone. It renders a ProductSlideShow
component to preview the product’s existing media.
Next, update the script with the following:
<script>
  import { actions } from "astro:actions";
  import { navigate } from "astro:transitions/client";
  document.addEventListener("astro:page-load", () => {
    const form = document.querySelector("form") as HTMLFormElement;
    const btnsDeleteMedia = document.querySelectorAll(".btn-delete-media");
    const lblSelectedFiles = document.querySelector(
      "#lbl-selected-files"
    ) as HTMLParagraphElement;
    const dropZone = document.querySelector("#drop-zone") as HTMLLabelElement;
    const fileInput = document.querySelector(
      "#file-upload"
    ) as HTMLInputElement;
    if (!form) {
      return;
    }

    form.addEventListener("submit", async (e) => {
      e.preventDefault();
      const formData = new FormData(form);
      const { data, error } = await actions.createUpdateProduct(formData);
      if (error) {
        return alert(error.message);
      }
      navigate(`/dashboard/products/${data.slug}`);
    });

    // Drag & Drop
    const preventDefaults = (e: DragEvent) => {
      e.preventDefault();
      e.stopPropagation();
    };
    const highlight = (e: DragEvent) => {
      dropZone.classList.add("border-blue-500", "bg-blue-50");
    };
    const unHighlight = (e: DragEvent) => {
      dropZone.classList.remove("border-blue-500", "bg-blue-50");
    };
    const createFileList = (files: File[]): FileList => {
      const dataTransfer = new DataTransfer();
      files.forEach((file) => dataTransfer.items.add(file));
      return dataTransfer.files;
    };
    const handleFiles = (files: FileList) => {
      // Keep only images and videos, matching the server-side validation
      const validFiles = Array.from(files).filter(
        (file) =>
          file.type.startsWith("image/") || file.type.startsWith("video/")
      );
      if (fileInput && validFiles.length > 0) {
        fileInput.files = createFileList(validFiles);
      }
      lblSelectedFiles.innerHTML = `<strong>${validFiles.length} files selected</strong>`;
    };
    (["dragenter", "dragover", "dragleave", "drop"] as const).forEach(
      (eventName) => {
        dropZone.addEventListener(eventName, preventDefaults);
        document.body.addEventListener(eventName, preventDefaults);
      }
    );
    (["dragenter", "dragover"] as const).forEach((eventName) => {
      dropZone.addEventListener(eventName, highlight);
    });
    (["dragleave", "drop"] as const).forEach((eventName) => {
      dropZone.addEventListener(eventName, unHighlight);
    });
    dropZone.addEventListener("drop", (e) => {
      const files = e.dataTransfer?.files;
      if (files) {
        handleFiles(files);
      }
    });
  });
</script>
This wires up a drag-and-drop interface for media uploads: it prevents the browser’s default behavior during drag events, visually highlights the drop zone, updates the file input with the selected files, and displays how many files were selected.
\\nThis section covers the deletion of media both from the database and Cloudinary.
\\nFirst, we’ll create a utility function to delete media files from Cloudinary. Add the following delete
function to the media-upload.ts
file:
export class MediaUpload {\\n ...\\n static async delete(mediaUrl: string, type: \\"image\\" | \\"video\\") {\\n const fileName = mediaUrl.split(\\"/\\").pop() ?? \\"\\";\\n const publicId = fileName.split(\\".\\")[0];\\n try {\\n await cloudinary.uploader.destroy(publicId, {\\n resource_type: type,\\n });\\n return true;\\n } catch (error) {\\n console.error(\\"Deletion error:\\", error);\\n return false;\\n }\\n }\\n}\\n\\n
Next, create a delete-product-media.action.ts
file in the actions/products
folder and add the following:
import { MediaUpload } from \\"@/utils/media-upload\\";\\nimport { defineAction } from \\"astro:actions\\";\\nimport { z } from \\"astro:schema\\";\\nimport { ProductMedia, db, eq } from \\"astro:db\\";\\nconst isValidMediaType = (type: string): type is \\"image\\" | \\"video\\" => {\\n return [\\"image\\", \\"video\\"].includes(type);\\n};\\nexport const deleteProductMedia = defineAction({\\n accept: \\"json\\",\\n input: z.string(),\\n handler: async (mediaId) => {\\n const [productMedia] = await db\\n .select()\\n .from(ProductMedia)\\n .where(eq(ProductMedia.id, mediaId));\\n if (!productMedia) {\\n throw new Error(`media with id ${mediaId} not found`);\\n }\\n const deleted = await db\\n .delete(ProductMedia)\\n .where(eq(ProductMedia.id, mediaId));\\n if (productMedia.media.includes(\\"http\\")) {\\n if (!isValidMediaType(productMedia.media_type)) {\\n throw new Error(`Invalid media type: ${productMedia.media_type}`);\\n }\\n await MediaUpload.delete(productMedia.media, productMedia.media_type);\\n }\\n return { ok: true };\\n },\\n});\\n\\n
This deleteProductMedia
action handles the server logic for deleting a product’s media file. It accepts a media ID as input, fetches the corresponding record from the ProductMedia
table, and throws an error if it’s not found. If the record exists, it deletes the entry from the database. If the media’s URL is an external link, it validates the media_type
, then calls MediaUpload.delete()
to remove the file from Cloudinary.
Now, update the script in products/[...slug].astro
with the following:
<script>
  import { actions } from "astro:actions";
  import { navigate } from "astro:transitions/client";
  document.addEventListener("astro:page-load", () => {
    ...
    btnsDeleteMedia.forEach((btn) => {
      btn.addEventListener("click", async (e) => {
        const id = btn.getAttribute("data-id");
        if (!id) return;
        const { error } = await actions.deleteProductMedia(id);
        if (error) {
          console.log(error);
          alert(error);
          return;
        }
        // getElementById avoids selector-escaping issues with UUID ids
        document.getElementById(id)?.remove();
        navigate(window.location.pathname);
      });
    });
  });
</script>
This deletes media files associated with a product. Once the page is loaded, it selects all delete buttons (.btn-delete-media
) and attaches a click event listener to each. When clicked, it retrieves the data-id
of the media item, calls the deleteProductMedia
action to remove it from the database, and upon success, removes the corresponding row from the HTML table.
In this tutorial, we implemented a secure file upload by leveraging Astro’s server-side rendering (SSR) support and Cloudinary. We also explored Astro server actions for managing server/client interactions efficiently, handling multiple file uploads, validating forms, and handling errors.
\\nIf you encounter any issues while following this tutorial or need expert help with web/mobile development, don’t hesitate to reach out on LinkedIn. I’d love to connect and am always happy to help!
TL;DR: A critical auth bypass vulnerability (CVE-2025-29927) in Next.js lets attackers skip middleware checks by faking the x-middleware-subrequest
header. It affects versions 11.1.4 through early 15.x. Managed hosts like Vercel were safe, but self-hosted apps relying on middleware for access control are at risk. Upgrade to a patched version (13.5.6, 14.2.24, 15.2.2+), or add auth checks directly in your protected routes if you can’t upgrade yet.
Next.js was recently at the center of a major web development controversy due to a critical vulnerability that allows unauthenticated users to bypass its authorization mechanisms. This vulnerability has been assigned the reference CVE-2025-29927 and a CVSS 3.1 score of 9.1.
\\nThese numbers may seem cryptic if you’re not familiar with security, but here’s what you need to know:
\\nThe vulnerability affects all versions of Next.js from 11.1.4 up to (but not including) the patched releases: 13.5.6, 14.2.24, and 15.2.2.
\\nIn this article, we’ll break down what causes this issue, how to prevent it, and how to structure your app to avoid being exposed.
\\nDiscovered by two security researchers who go by the pseudonyms zhero and inzo while digging into the Next.js source code, this vulnerability lets attackers bypass security checks by manipulating an internal HTTP header called x-middleware-subrequest
. If you’ve worked on a Next.js application with auth, you know the middleware is usually where you’d handle these checks to make sure users are allowed to access certain pages and redirect them based on their role.
Say you’re building a SaaS app. You’d typically check in the middleware if a user is a paying customer before letting them access paid features, or redirect them to the pricing page if they’re not:
// middleware.js

import { NextResponse } from 'next/server';

// Helper functions
async function isValidSession(sessionToken) {
  …
}

async function isPaidUser(sessionToken) {
  …
}

export async function middleware(request) {

  const { pathname } = request.nextUrl;

  // Paths to dashboard and pricing page
  const dashboardPath = '/dashboard';
  const pricingPath = '/pricing';

  // Check if the user is trying to access the dashboard or any paid features area
  if (pathname.startsWith(dashboardPath) || pathname.startsWith('/paid-features')) {
    // Get the session cookie
    const session = request.cookies.get('session')?.value;

    // If session is not valid, redirect to login
    if (!session || !(await isValidSession(session))) {
      return NextResponse.redirect(new URL('/login', request.url));
    }

    // Check if the user is a paid user
    if (!(await isPaidUser(session))) {
      // If not a paid user, redirect to the pricing page
      return NextResponse.redirect(new URL(pricingPath, request.url));
    }

    // If the user has a valid session and is a paid user, allow access
    return NextResponse.next();
  }

  // For other paths, continue with the request
  return NextResponse.next();
}
This middleware vulnerability gives attackers a way to bypass these checks and access protected routes in your app, basically letting them use paid features without paying.
\\nMiddleware is a core concept in many web frameworks. It’s a layer of code that sits between the request and response cycle of an application, often used to perform generic actions like error handling, authentication, authorization, path rewrites, and more.
\\nNext.js follows this same idea, but with a twist: its middleware runs by default on every route. That’s different from other frameworks, where middleware usually needs to be attached to specific routes or route groups. Next.js does let you limit its scope using route matching, but the default behavior is global.
\\nBecause of this, the middleware is a common place to handle security logic like authentication and authorization, so you don’t have to repeat it across routes.
\\nThat global behavior, combined with the internal x-middleware-subrequest
header, created an exploitable vulnerability that attackers used to bypass security measures entirely.
The vulnerability centers around the x-middleware-subrequest header
, which is meant to prevent infinite middleware recursion. The blog post published by the researchers who found the bug shows how this header is used in the Next.js codebase.
This part of the codebase runs your middleware and uses the x-middleware-subrequest
header to track how many times it has been executed during a single request.
When a user visits a protected route, the middleware might use NextResponse.rewrite()
or NextResponse.redirect()
to either rewrite the request to a different internal path or send a redirect back to the client after successful authentication. For example, if a user logs in, the middleware might rewrite /login
to /admin
. Since this triggers a new internal request to /admin
, Next.js considers it a subrequest.
Let’s say the user is authorized and tries to load a resource, like an image, that requires an internal fetch to the server. If that fetch also triggers middleware logic (e.g., for checking session data), it becomes another subrequest.
\\nTo prevent the middleware from getting stuck in an infinite loop, Next.js tracks the chain of these subrequests using the x-middleware-subrequest
header. It appends the middleware name to the header as a colon-separated string. Each time the middleware runs, it reads this header, splits it into an array, and counts how many times its own name appears.
If the middleware has been triggered more than five times (as defined by the MAX_RECURSION_DEPTH
constant), Next.js halts any further middleware execution for that request.
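To make the mechanism concrete, here is a simplified sketch of that guard. This is not the actual Next.js source, just the logic the researchers describe:

// Simplified sketch of the recursion guard -- not the real Next.js code
const MAX_RECURSION_DEPTH = 5;

function shouldSkipMiddleware(request, middlewareName) {
  const header = request.headers.get("x-middleware-subrequest") || "";
  // The header is a colon-separated chain of middleware names
  const chain = header.split(":");
  const depth = chain.filter((name) => name === middlewareName).length;
  // Once the name appears five times, middleware execution is skipped
  return depth >= MAX_RECURSION_DEPTH;
}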
By manually adding the x-middleware-subrequest
header with the right value, an attacker can trick Next.js into thinking the middleware has already run five times, when it hasn’t run at all.
This will cause the runtime to skip the middleware completely. Execution will jump directly to the if
statement, exit early, and forward the request via NextResponse.next()
, sending the request through without any authentication or authorization checks.
Then, all an attacker would need is the name and path of the middleware file, which, due to Next.js’ predictable naming conventions, is relatively easy to guess.
\\nIn older versions of Next.js (v11 to v12), middleware files had to be named _middleware.ts
and placed inside the pages
directory. That made them easy to locate, so the exploit could be used like this:
x-middleware-subrequest: pages/_middleware\\n\\n
In newer versions (v12.2 and up, including v13+), the naming convention changed to middleware.ts
, and the file is typically placed in the project root, or the src
directory if that’s configured. This makes the exploit just as easy to apply, and it would look something like this:
x-middleware-subrequest: middleware\\n\\n
Or:
\\nx-middleware-subrequest: src/middleware\\n\\n
In newer versions, starting with v14+, the logic changed slightly. To make the exploit work, the header value now needs to be repeated multiple times:
\\nx-middleware-subrequest: middleware:middleware:middleware:middleware:middleware\\n\\n
Or:
\\nx-middleware-subrequest: src/middleware:src/middleware:src/middleware:src/middleware:src/middleware\\n\\n
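Putting it together, a hypothetical proof-of-concept request against a vulnerable self-hosted app could look like this (the domain and route are made up for illustration):

// For illustration only: the forged header convinces a vulnerable
// server that the middleware has already run five times
const res = await fetch("https://vulnerable-app.example.com/dashboard", {
  headers: {
    "x-middleware-subrequest":
      "middleware:middleware:middleware:middleware:middleware",
  },
});
console.log(res.status); // 200 instead of a redirect to /login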
Contrary to what you might think, not every Next.js app was affected by this vulnerability. Apps hosted on managed platforms like Vercel and Netlify were automatically protected. The same goes for static sites, because they don’t rely on middleware, and for apps behind Cloudflare with properly configured web application firewall (WAF) rules.
\\nThe real risk was for self-hosted apps that rely on middleware for security checks without any fallback or secondary validation. You might think there aren’t many self-hosted Next.js apps out there, but with all the recent noise about Vercel’s surprise billing, more devs are moving to self-hosted setups.
\\nI’m not going to dive into the many advanced ways to fix this vulnerability; I’ll leave that to the experts. What I will do is show you, as a frontend developer, how you can protect your Next.js applications from this vulnerability.
\\nAs I mentioned earlier, this vulnerability has been patched in the latest versions of Next.js, so the fastest way to secure your app is to update to the latest release (v15 at the time of writing). But not everyone can do that right away. This vulnerability affects old versions, and plenty of teams stick with older versions of Next.js for the sake of stability or to avoid breaking changes. Updating might not be an easy lift.
\\nIf that’s the case, your best option is to add extra security checks directly inside protected routes. For example, on an admin page, you could do something like this:
import { getServerSession } from "next-auth/next";
import { authOptions } from "@/app/api/auth/[...nextauth]/route";
import { redirect } from "next/navigation";

export default async function DashboardPage() {

  const session = await getServerSession(authOptions);

  // Session check
  if (!session) {
    redirect("/api/auth/signin");
  }

  // Session integrity check (note the await -- the helper is async)
  const isSessionValid = await verifySessionIntegrity(session);
  if (!isSessionValid) {
    redirect("/api/auth/signout?error=session-integrity");
  }

  return (
    …
  );
}

// Helper function
async function verifySessionIntegrity(session) {
  // Check for session tampering signs
  if (!session.user?.email || !session.user?.id) return false;

  // Compare with database record for consistency
  const userRecord = await getUserFromDb(session.user.id);
  if (!userRecord) return false;

  return true;
}
I know this almost defeats the whole point of using middleware in the first place. It can get tedious, especially if your app has many pages. But if you’re stuck on an older version, this extra step is a small price to pay to keep your app secure.
\\nAlways stay informed about security advisories for all your dependencies. Subscribe to security mailing lists and enable security alerts for your repositories to catch vulnerabilities early.
\\nUpdate your dependencies regularly. Considering that this vulnerability traces back as far as v11, regular updates are crucial to maintaining security.
\\nBe cautious when relying on HTTP headers for security-related decisions. They can be easily manipulated by bad actors.
\\nAt first glance, Next.js’s middleware vulnerability might not seem like a big deal. Maybe it just lets someone use a paid app for free, right? Not exactly. It’s much more serious for apps that expose sensitive data, like internal reports or confidential documents, that aren’t strictly tied to a user account. In those cases, unauthorized access could lead to compliance issues and serious damage to your reputation.
\\nThe goal of this article is to help developers and app owners understand the risk and take steps to protect their Next.js apps. It’s been a little over a month since the patch was released, but most of the conversation has stayed on social media. Hopefully, this gives some clarity to those who might have missed it.
Node.js 24 officially launched on May 6, 2025, bringing fresh updates focused on innovation and long-term stability. It’s set to enter LTS (Long-Term Support) in October 2025, making it a key version for developers to adopt in production environments.
\\nIn this article, we’ll break down Node.js’s release cycle, highlight the most important new features, and walk you through what you need to do to get your projects ready for the update.
\\nNode.js has a dual-track system: even vs. odd versioning. Even versions (like 20.x
and 22.x
) are candidates for LTS (Long-Term Support) and get 30 months of support, making them the safest choice for most teams. Odd versions (like 21.x
and 23.x
) are short-term and ideal for testing new features, but aren’t meant for long-term use.
A Node.js release goes through three stages: Current, Active LTS, and Maintenance LTS. Let’s break it down based on the release schedule as of May 2025:
\\nWe can see that Node.js 23 is in the Current stage. It’s for early adopters who want access to the newest JavaScript features and V8 engine updates. That sounds pretty cool, but this version is short-lived, supported for only six months, and may include breaking changes, making it unsuitable for production. On the other hand, Node.js 22.x is in the Active LTS stage. It’s fully stable, receives security patches and critical bug fixes, and is the right choice for any production system or long-term project. Lastly, Node.js 20.x has entered the Maintenance LTS stage, meaning it only gets security fixes. It’s time to consider upgrading if you’re still running on it.
\\nNode.js’s release model may look confusing at first, but it’s carefully designed to balance stability and innovation. Enterprises get access to stable, long-term supported versions, while developers can safely explore and test new features without risking production systems.
\\nWith Node.js 24 now available and LTS just around the corner, this release brings meaningful updates that reflect the platform’s continued evolution. From performance improvements to enhanced language features, Node.js 24 introduces tools that make modern development faster, more secure, and more efficient. Here’s a look at what’s new.
\\nNode.js 24 is a major release packed with exciting new features, including a V8 Engine upgrade to v13.6. Let’s walk through the key updates, grouped by V8 engine upgrade, performance, security, developer experience, and stability, so you can prioritize what matters most for your projects.
\\nA key highlight of Node.js 24 is upgrading the V8 engine to version 13.6. The V8 engine in Node.js is a high-performance JavaScript engine developed by Google that executes JavaScript code.
\\nBelow are the new features offered in the V8 upgrade:
\\nFloat16Array enables more efficient storage and manipulation of 16-bit floating-point numbers, which is particularly useful for machine learning, graphics processing, and other computing tasks where memory efficiency is critical.
using and await using are features from a TC39 proposal. The feature simplifies handling cleanup operations (like closing files or releasing memory), making code simpler and lowering the chance of memory leaks.
RegExp.escape provides a convenient way to escape special characters in regular expressions, making pattern construction safer, especially when dealing with dynamic input.
WebAssembly Memory64 extends WebAssembly’s capabilities by supporting 64-bit memory addressing, enabling larger and more complex apps to run efficiently.
Error.isError offers a standardized way to check if an object is an Error instance. It is helpful in apps that deal with errors from different execution contexts.
\\nThe built-in Undici HTTP client is upgraded to version 7.0 in Node.js 24. Undici is built-in HTTP client in Node.js developed by Node.js developers to improve the HTTP stack performance without breaking the existing API.
\\nThe new version introduces significant performance improvements and features, including enhanced connection pooling and better HTTP/2 support. The update brings measurable speed boosts, with benchmarks showing up to 30% faster requests than previous versions.
\\nOther key additions include improved WebSocket client capabilities, more stable retry mechanisms, and smarter load balancing across connections. These changes make Undici v7 a more powerful tool for high-performance HTTP communication in Node.js apps.
\\nNode.js 24 introduces an important under-the-hood improvement for AsyncLocalStorage
. It now defaults to using the new AsyncContextFrame
implementation. This change brings performance benefits while maintaining the existing API. It is good news for developers working on a distributed tracing/logging system or dealing with request context propagation.
Node.js 24 has officially promoted its permission model out of experimental status. The once --experimental-permission
flag, now becomes --permission
. It is a clear signal that this security feature is ready for production.
The permission model is Node.js’s answer to modern security challenges, allowing us to restrict filesystem, network, and environment access.
\\n\\nHere is a simple example:
\\n// Run your Node.js application with permissions enabled\\n$ node --permission --allow-fs-read=/allowed/path app.js\\n\\n
After running the above command, our application can only read from /allowed/path
, and all other filesystem access is denied by default.
This stabilization marks an important milestone in Node.js’s security evolution.
\\nchild_process
argument handling
Now, the way to pass arguments to child_process.spawn()
and execFile()
has changed to disallow string arguments. Instead, Node.js enforces explicit array-based argument passing to prevent shell injection risks and improve consistency.
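Here is a minimal sketch of the array-based style this enforces; the command and arguments are illustrative:

const { execFile } = require("node:child_process");

// Arguments are passed as an explicit array, so user-supplied values
// can never be interpreted as extra shell syntax
execFile("ls", ["-l", "/tmp"], (err, stdout) => {
  if (err) throw err;
  console.log(stdout);
});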
Node.js 24 ships with npm v11, bringing a number of improvements in performance and security.
\\nSome notable changes are:
The --ignore-scripts
flag now applies to all lifecycle scripts, preventing potentially unsafe script execution.^20.17.0 || >=22.9.0
, keeping npm aligned with recent Node.js LTS releases.Node.js 24 brings an improvement for web developers: URLPattern
is now available as a global object, just like its cousin URL
. This means no more pesky imports cluttering up our routing files!
A URL pattern is like regular expressions for URLs, but with a much cleaner syntax that’s easier to read and maintain.
\\n// No need to import anything!\\nconst userRoute = new URLPattern({ pathname: \'/users/:id\' });\\n\\n// Test a URL\\nconst match = userRoute.exec(\'https://example.com/users/42\');\\nconsole.log(match.pathname.groups.id); // Outputs: \\"42\\"\\n\\n
This feature helps with API endpoint validation by matching and handling specific route patterns. Developers can use it to create simple, custom routing systems without relying on large libraries. It’s also useful in web scrapers for processing and extracting data from structured URLs.
\\nThe built-in Node.js test runner now automatically waits for all subtests to complete.
\\n// before Node.js v24\\ntest(\'API test suite\', async (t) => {\\n const api = await setupTestAPI();\\n // Had to remember to await each subtest\\n await t.test(\'GET /users\', async () => {\\n const response = await api.get(\'/users\');\\n deepStrictEqual(response.status, 200);\\n });\\n});\\n\\n// after Node.js v24\\ntest(\'API test suite\', async (t) => {\\n const api = await setupTestAPI();\\n // No awaits needed - runner handles it automatically\\n t.test(\'GET /users\', async () => {\\n const response = await api.get(\'/users\');\\n deepStrictEqual(response.status, 200);\\n });\\n});\\n\\n
This enhancement makes Node.js’s test runner more intuitive. It’s one of those quality-of-life improvements that will quietly improve our testing experience.
\\n\\nThere are a few legacy APIs deprecated or removed in this release.
\\nurl.parse()
is deprecated. It is recommended that the WHATWG URL API
be used, as it is more standards-compliant and secure.
// Deprecated (throws runtime warning)
const parsed = require('url').parse('https://example.com');

// Alternative
const parsed = new URL('https://example.com');
The deprecated tls.createSecurePair
is removed.
// Removed (no longer available)\\nrequire(\'tls\').createSecurePair();\\n\\n// Use TLSSocket instead\\nnew tls.TLSSocket(socket, options);\\n\\n
SlowBuffer
is deprecated now. If it is used, a runtime warning will be thrown.
// Deprecated (use Buffer.allocUnsafeSlow)\\nconst slow = new SlowBuffer(10);\\n\\n// Modern alternative\\nconst slow = Buffer.allocUnsafeSlow(10);\\n\\n
new
keyword for REPL/Zlib classes
It is now runtime-deprecated to create a REPL instance or use Zlib classes without the new
keyword. This change aims to better align with standard JavaScript class conventions.
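A quick sketch of the deprecated pattern versus the supported one, assuming Node.js 24:

const zlib = require("node:zlib");

// Runtime-deprecated in Node.js 24: constructing without `new`
const gzipDeprecated = zlib.Gzip(); // emits a deprecation warning

// Preferred: standard class construction with `new`
const gzip = new zlib.Gzip();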
As you plan your upgrade, keep in mind that some APIs and patterns have been deprecated. These changes may require updates to legacy code, especially if your project uses features like REPL or Zlib without the new
keyword, or passes arguments incorrectly to child_process
methods. Reviewing the official Node.js 24 release notes and using tools like node --trace-deprecation
during testing can help you spot and fix these issues early.
To upgrade, you can use a version manager like nvm
. Once nvm is installed, you can install and switch to Node.js 24 with the commands below:
nvm install 24\\nnvm use 24\\n\\n
That, or you can download the latest version directly from the Node.js website.
\\nSince Node.js 24 will enter LTS in October 2025, now’s the time to explore its new features. Whether you’re maintaining node apps or building new projects, getting ahead of the curve will help keep your stack secure, stable, and future-ready.
\\nNode.js 24 reflects the community’s ongoing commitment to balancing cutting-edge features with long-term stability. Now’s the perfect time to explore what’s new, test your code, and prepare for the upcoming LTS release. Your future-ready projects start here—happy coding!
There is a growing demand for large language models (LLMs) that can work offline and locally on someone’s machine. This allows for a workflow that is cost-efficient, reliable, and private. Platforms like Ollama simplify the process by making it easy to download open source models directly onto your hardware. Developers are then able to run models like Llama 3, Gemma 3, and DeepSeek R1 without depending on external API calls, which can be costly and time-consuming. The biggest benefit of running everything on a local machine is secure, private AI integration.
\\nIn this article, we will explore the benefits of using local LLMs beyond the simple, reactive chatbots with which we’re all familiar. We’ll also cover what it’s like to work with AI agents, which are systems capable of autonomous planning, tool utilization, and more complex goals. We’ll demonstrate these agents through an agentic AI workflow, learning how to integrate local models served via Ollama with a React frontend.
\\nArtificial intelligence has gone through many eras, each one offering significant improvements and creating more powerful and complex systems. Rule-based systems, often called expert systems, dominated during the early days of AI, with humans manually encoding information in IF-THEN rules and statements. This was effective for well-defined problems in smaller use cases, but these types of systems were not very intuitive, difficult to scale, and couldn’t handle challenging problems or learn from more data.
\\nThe second major leap came with machine learning and neural networks, especially deep learning. These tools were capable of learning and creating patterns directly from large data instead of explicit rules. This transition made breakthroughs possible in image recognition, speech processing, and eventually led to present-day LLMs.
\\nLLMs are typically based on transformer models and can demonstrate a new ability to understand, generate, and process human language, enabling applications like advanced chatbots, content generation systems, and complex question-answering systems.
\\nEven the most powerful LLMs typically operate reactively, meaning they receive an input (prompt) and produce an output. The push for AI systems that can proactively solve complex, multi-step problems led to the emergence of agentic AI systems.
\\nAn agentic workflow usually means a staged set of steps, possibly incorporating multiple LLM calls, tools, or data processing steps, staged to produce a specific output. You can think of it like a predefined recipe where AI elements execute given tasks.
\\nThe majority of physical robots in the real world depend on pre-coded instructions or rule-based systems, which limit their autonomy. Even though they are able to perform workflows, which are typically a simple collection of operations, they usually lack the capabilities to reason or adapt. An AI agent, however, is defined by its ability to reason, plan, and act autonomously towards the achievement of a higher-order goal. Agentic workflows are not the same as traditional ones because the agent can dynamically change its strategy in response to feedback, environmental shifts, or new information, as opposed to sticking to a predefined script.
\\nFor example, a traditional warehouse robot would travel along a set path to collect and drop off goods, and if it finds an obstruction, it stops and waits for a human to intervene. An AI agent in the same setting, however, might redo its path, reorder its activities based on delivery priority, or even collaborate with other agents to streamline the entire process. It is this ability to change and make independent choices that defines agentic behavior.
\\nThe creation of agentic AI systems is a huge milestone toward more capable and independent AI. With reasoning and language capabilities of modern LLMs, these systems are supposed to be capable of performing tasks that require planning, working with external tools (like APIs or databases), and are able to retain context across deep and long interactions. They do not just react to questions and prompts; these types of agents can actively pursue goals.
\\n\\nFor an AI agent to function effectively, it usually relies on several important components: a reasoning and planning core (the LLM itself), access to external tools such as APIs and databases, and memory for retaining context across long interactions.
\\nThese are the core principles for building sophisticated agentic workflows, and we will explore them in the following sections using Ollama and React.
\\nWhen it comes to developing agentic AI workflows, having the ability to use local models with tools like Ollama can offer significant advantages over using cloud-based solutions.
\\nOne advantage is enhanced data privacy. When processing sensitive or proprietary information on local hardware, the data never leaves the controlled environment, meaning it stays secure. This reduces the likelihood that the data can be manipulated or lost, a real risk when it passes through external services.
\\nIn addition to privacy, local models have economic and practical benefits. Local model usage can lead to significantly reduced long-term operational costs, especially for high-frequency or high-volume usage, as you avoid recurring subscription fees for cloud-based APIs.
\\n\\nLocal models also allow for better offline capabilities, meaning that agentic workflows can run continuously without an internet connection. This ensures uninterrupted functionality and expands the range of deployment scenarios for your AI applications, making them more robust and less reliant on network connections.
\\nPerformance can also improve when working with local models. Eliminating the need to transmit data to and from distant servers drastically reduces latency, resulting in the quicker response times essential for interactive or real-time agentic work. Even though cloud infrastructure can be reliable, having immediate access to and control over the local environment allows for better performance optimization.
\\n\\nPrivacy, cost advantages, offline support, and performance optimization are powerful reasons for including local models in an agentic AI development workflow.
\\nOllama is a strong, open source tool that has been designed to simplify running large language models on a machine. Ollama packages models, their weights, configurations, and dependencies into a single package that’s easily distributable. It’s a streamlined process for running various LLMS without the pain of dependencies and frameworks.
\\nOllama has both a command-line interface (CLI) and an API, so it is easily accessible for direct use and programmatic integration into applications. Its primary function is to serve these models so that you can query them through a simple interface.
\\nOllama has an increasing list of available models that you can easily download and run. These range from very popular open source models with a variety of sizes and capabilities, such as Llama 3, Mistral, Gemma, and more. Their capabilities vary, with some leaning towards general text generation and conversation, and some being better for highly specialized use cases like code generation, summarization, or even multimodal input processing (e.g., text and images).
\\nThe features are dependent on the model that you choose, and in Ollama, you can experiment with different models to choose the best option for your project. You can get a list of the models available and their parameters from either the Ollama GitHub repository or the Ollama model search page.
\\nFirst, download the Ollama installer application. Go to the Ollama website and download the installer for your operating system as shown here:
\\nOn the next screen, we can choose our operating system before downloading the application. Once you download the application, install it on your machine. When Ollama has been installed, it will run as a background service on your machine. The main way to interact with it is through the command line.
\\nWhen your application is up and running, go to the Ollama search page to find a model to download. Each model can come in different quantization variants (e.g., q4_K_M, q6_K, f16), which affect both size and performance. Generally, larger files mean more parameters and better quality, but they take up more disk space and memory. Smaller ones are compressed to save space and run faster on lower-end hardware, sometimes at the cost of accuracy or functionality.
\\nModel files tend to be approximately 1 GB to 50 GB, depending on the model and quantization. Ideally, you will need significant disk space and a reasonably fast machine with plenty of RAM (8 GB minimum, 16 GB+ preferable) for comfortably running large models.
\\nThe following commands will download the LLM onto your machine:
\\n# Example syntax to pull a model\\nollama pull <model_name>\\n\\n# Pulling llama3.2\\nollama pull llama3.2\\n\\n
If you want to view the LLMs you have downloaded, you can run this command to list them:
\\nollama list\\n\\n
You can learn all of the commands by visiting the official Ollama repo on GitHub.
\\nThe final step is running the model via the command line. Each model has instructions on its page for how to run the model. For example, we can use the following command to run llama3.2
on our machine:
ollama run llama3.2\\n\\n
After running the LLM, you will see a familiar chat prompt where you can talk to the model, much like the hosted interfaces of ChatGPT, Claude, and similar services.
\\nAnd that’s it! Now you can safely and privately run LLMs locally on your machine for free.
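Beyond the interactive CLI, Ollama also serves a local REST API (on port 11434 by default), which is how applications talk to it programmatically. For example, assuming you have pulled llama3.2:

// Querying the local Ollama REST API
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  body: JSON.stringify({
    model: "llama3.2",
    prompt: "Suggest three sights to see in Lisbon",
    stream: false, // return a single JSON object instead of a token stream
  }),
});

const { response } = await res.json();
console.log(response);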
\\nOur travel planning app is being developed with two interfaces: Gradio and React. Both are designed to share nearly identical core functionality.
\\nGradio is an open source Python library perfect for rapidly developing interactive web demos of machine learning models. It’s convenient for quickly exposing and iterating on our Ollama-driven agent’s core logic. Gradio’s biggest strengths are its ease and speed for prototyping AI capability with minimal code, which is extremely valuable when testing interactions and visualizing the flow. However, that ease of use means that we have less control and fewer customization options with Gradio, which is important when building a user interface.
\\nGradio is very popular in the AI field, so it is worth learning but if we wanted to have more advanced interactive capabilities, precise control over the user experience, and seamless integration with more complex web application features, it would be ideal to use a JavaScript framework or an alternative library more suited for a production application.
\\nReact, on the other hand, unlocks the ability to create a genuinely professional, scalable, and maintainable application. As a leading JavaScript library, React offers much better ease of use when building complex, dynamic user interfaces with sophisticated interactions that can provide an enhanced user experience. The fact that React frontends interact with your backend and, in this case, handle the Ollama calls and agent rules using conventional API calls is especially useful.
\\nThis UI decoupling from the internal AI backend is important for creating solid applications that can scale with ease, integrate with other services, become maintainable, and provide a professional-level user interface — something that Gradio doesn’t offer, as it’s built more for demoing features.
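As a sketch of that decoupling, the React side only ever sees a conventional JSON endpoint. The endpoint path and payload shape below are assumptions for illustration, not the project’s exact API:

// Hypothetical client helper: the React app calls the FastAPI backend,
// which in turn talks to Ollama -- the UI never touches the model directly
async function generatePlan(destination: string, days: number) {
  const res = await fetch("http://localhost:8000/api/plan", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ destination, days }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}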
\\nYou can find the source code for this project here. All you need to do is set it up on your local machine to get it running.
\\nThe technical stack is as follows:
\\n\\nThis application uses Ollama to run models locally on your machine. Make sure that you have Ollama installed and running and that you have downloaded at least one LLM, which we did in the previous section.
\\nNow, follow the steps below to set up the project:
\\nRun the following command in your terminal and clone this Git repository somewhere on your local machine:
\\ngit clone https://github.com/andrewbaisden/travel-planner-ai-agent-app.git\\ncd travel-planner-ai-agent-app\\n\\n
You should now have an identical copy of this repo on your machine.
\\nNext, we need to set up our Python and FastAPI backend. In the root directory of the travel-planner-ai-agent-app
folder, run the following commands:
\\nDepending on your setup, you might need to use either the
python
orpython3
command.
# Create a Python virtual environment\\npython3 -m venv venv\\nsource venv/bin/activate # On Windows, use: venv\\\\Scripts\\\\activate\\n\\n# Install Python dependencies\\npip3 install -r requirements.txt\\n\\n# Change into the backend folder\\ncd backend\\n\\n# Start the FastAPI servers\\npython3 api.py # Run the FastAPI API server\\npython3 app.py # Run the Gradio frontend\\n\\n
These commands create a Python virtual environment, install the Python dependencies, and get our backend servers running. We have a FastAPI server that has the endpoints for our frontend to use. We also have a Gradio interface for connecting to our backend, which is good for demos.
\\nFinally, let’s get our React frontend up and running. Run these commands to complete the setup process:
\\n# Navigate to the frontend directory\\ncd frontend\\n\\n# Install dependencies\\nnpm install\\n\\n# Start the development server\\nnpm run dev\\n\\n
Our application should be fully working!
\\nBelow are the key endpoints and interfaces for interacting with the backend:
\\nThis launches a local web interface to interact with the Travel Planner AI agent.
\\nThis is what our Gradio interface looks like in simple mode:
\\nThis is what our Gradio interface looks like in agentic mode:
\\nBelow is the main URL for accessing the frontend application:
\\nThis opens the user-facing web interface for interacting with the Travel Planner agent. Our React application looks like the following in simple mode:
\\nThis is what our React app looks like in agentic mode:
\\nIn theory, this application can work with various vendors’ LLMs. However, I find it performs best with the Llama models (e.g., Llama 3.1, Llama 3.2).
\\nGenerating a travel plan locally can be a bit slow, depending on your machine’s performance. That’s because you’re running these LLMs directly on your machine, not through cloud-based services. Unlike online LLM platforms, which run on powerful servers built to handle thousands of requests at once, your local setup is limited by your device’s hardware.
\\nOn my M1 MacBook Pro, the simple workflow generated a plan in about one to two minutes, while the agentic workflow took over three minutes to achieve the same result. Of course, this is just a demo app and not meant for production, so these times are acceptable for experimentation.
\\nThere are many platforms available for creating AI agents. Let’s review some of the popular options and what they have to offer:
\\nDeveloped by Microsoft, Semantic Kernel is an open source SDK that is used to embed large language models (LLMS) inside mainstream programming languages. It is intended to manage AI workflows and bring AI capabilities into mainstream apps with an emphasis on enterprise applications and multi-language support (primarily C#, Python, and Java).
\\nThis architecture is designed to connect LLMS with external datasets, enabling the creation of AI agents that are capable of accessing, consuming, and reasoning about private or domain-specific data. LlamaIndex is good at knowledge-intensive applications and offers data indexing, retrieval functionality, and support for various data storage solutions.
\\nDesigned as an extension to the LangChain ecosystem, LangGraph provides a graph-based method for building stateful, multi-actor applications with LLMS. It is well-suited to creating complex, dynamic workflows that require cycles and the capacity to remember conversational state, and allows for much better control over agent conversation direction.
\\nBasing itself on experimental OpenAI Swarm, this SDK offers a more standard toolkit for developing reasoning, planning, and externally calling functions or API-calling agents. It offers primitives for agent specification, task transfer between agents, and safety features to enable multi-step or multi-agent processes to be simpler, especially within OpenAI.
\\nAn open source platform primarily focused on building conversational AI agents or chatbots. Rasa provides features for natural language understanding, conversation management, and integration with other messaging platforms to enable developers to create interactive and context-aware AI assistants.
\\nThe Hugging Face platform uses the popular Hugging Face Transformers library to enable the creation of agents that can use tools and perform complex tasks by interacting with models on the Hugging Face Hub or elsewhere. It provides a general-purpose solution to create agents with access to lots of pre-trained models and community-contributed tools.
\\nA low-code, open source platform that lets users visually build and deploy LLM apps, such as AI agents, through a drag-and-drop interface. It supports integration with various data sources, LLMs, and tools, making it a good fit for users who prefer a graphical development environment.
\\nWhile a general-purpose workflow automation platform, n8n has the ability to embed AI agents into automations. Its graphical workflow designer offers the capability to connect applications and services, including AI models, to build complex automations that can use agent-like functionality.
\\nBeyond these, a number of other tools are worth keeping an eye on as the ecosystem evolves.
\\nBuilding an agentic AI workflow can give us valuable insights into how large language models are used for developing intelligent, task-specific systems. One of the biggest advantages they offer is the ability to use local models with the help of tools like Ollama, which addresses the main concerns with using LLMs online, including data privacy, cost savings, offline access, and performance optimization. Combining this local AI functionality with a React frontend allows for robust, user-friendly applications.
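To make the local-model piece concrete, here is a minimal sketch of calling a locally running Ollama server from TypeScript. It assumes Ollama is listening on its default port (11434) and that the model has already been pulled; the model name and prompt are illustrative:

```typescript
// A minimal sketch of talking to a local Ollama server (default port assumed).
async function generatePlan(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1", // any locally pulled model tag (illustrative)
      prompt,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // Ollama returns the completion text in `response`
}

generatePlan("Plan a three-day trip to Lisbon.").then(console.log);
```

Because the request never leaves your machine, the same call works offline and keeps the prompt data private.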
\\nThe development of tools like Ollama and more efficient open source models demonstrates the growing importance of local models in the future of AI. They are essential for creating more private and manageable AI, making powerful capabilities more accessible to users while keeping their data in their hands. Building with local models is a good investment in a distributed, privacy-conscious future for AI.
\\n URL state management is the practice of storing application state in the URL’s query parameters instead of component memory. useSearchParams
is a Hook that lets you read and update the query string in the browser’s URL, keeping your app’s state in sync with the address bar.
If you’re using the useState
Hook to manage filters or search parameters in your React app, you’re setting your users up for a frustrating experience. When a user refreshes the page, hits the back button, or tries to share a filtered view, all their selections vanish. That’s because the component state lives only in memory.
A better approach is to store the state in the URL. It keeps filters persistent, makes views shareable, and improves the user experience. Routing solutions like React Router and Next.js offer built-in ways to work with URL query parameters, making this approach straightforward to implement.
\\nIn this article, we’ll focus on React Router’s useSearchParams
Hook and show how to manage the state through the URL for a more resilient, user-friendly app.
To see the benefits of a URL-based state in action, we built a simple country explorer app with filters for name and region.
\\nOne version uses useState
, storing the filter state locally (see the useState
live demo here).
The other uses useSearchParams
, which stores the state in the URL (see the useSearchParams
demo here):
The difference is clear: one forgets your selections after a refresh or navigation, while the other remembers them and makes the view easily shareable. This subtle shift results in a far smoother, more reliable user experience.
\\nEditor’s note: This post was updated by Ibadehin Mojeed in May 2025 to contrast useState
and useSearchParams
through the use of two demo projects and address FAQs around useSearchParams
.
useState
for managing filter state
In the useState
version of our country explorer app, we manage the search query and region filter using the local component state:
const [search, setSearch] = useState(\'\');\\nconst [region, setRegion] = useState(\'\');\\n\\n
Country data is fetched from the REST Countries API using React Router’s clientLoader
:
export async function clientLoader(): Promise<Country[]> {\\n const res = await fetch(\'https://restcountries.com/v3.1/all\');\\n const data = await res.json();\\n return data;\\n}\\n\\n
While the method of filtering the data itself isn’t the focus here, how we store the filter state is. In this implementation, user input from the search and region fields is captured via onChange
handlers and stored locally using useState
:
<div className=\\"flex flex-col sm:flex-row gap-4 mb-8\\">\\n <div className=\\"relative w-full sm:w-1/2\\">\\n <input\\n type=\\"search\\"\\n placeholder=\\"Search by name...\\"\\n value={search}\\n onChange={(e) => setSearch(e.target.value)}\\n // ...\\n />\\n </div>\\n <select\\n value={region}\\n onChange={(e) => setRegion(e.target.value)}\\n // ...\\n >\\n </select>\\n</div>\\n\\n
Because this filter state is stored only inside the component, it resets on page reload, can’t be bookmarked or shared, and isn’t accessible outside its local tree. That’s the key limitation of using useState
here.
useSearchParams
To address these limitations, we can move the filter state into the URL using query parameters. This approach, shown in the demo earlier, preserves filter settings across reloads, enables easy sharing via links, and greatly improves the user experience.
\\nFor example, after selecting a region and typing a country name, the URL might look like this:
\\nhttps://use-search-params-silk.vercel.app/url-params?region=asia&search=vietnam\\n\\n
This URL encodes the app’s current filter state, making it easy to bookmark, share, or revisit later.
\\nuseSearchParams
for state management
React Router’s useSearchParams
Hook lets us read and update URL query parameters (the part after the ?
). It behaves much like useState
, but instead of storing values in memory, it stores them directly in the URL. This makes the filter state stay persistent through reloads.
In our Country Explorer app, we use it like this:
\\nconst [searchParams, setSearchParams] = useSearchParams();\\n\\n
Here, searchParams
is an instance of the URLSearchParams
object reflecting the current query parameters in the URL. The setSearchParams
function updates these parameters, which in turn updates the URL and triggers navigation automatically.
To access filter values stored in the URL, we extract them using the searchParams
object like this:
const search = searchParams.get(\'search\') || \'\';\\nconst region = searchParams.get(\'region\') || \'\';\\n\\n
Since URL parameters are always strings, it’s important to convert them to the appropriate types when needed. For example, to handle numbers or booleans, we can do:
\\nconst page = Number(searchParams.get(\'page\') || 1);\\nconst showArchived = searchParams.get(\'showArchived\') === \'true\';\\n\\n
This ensures our app correctly interprets the parameters and maintains expected behavior.
\\nTo keep the URL in sync with user inputs, we update the query parameters inside their respective event handlers using setSearchParams
:
// Update search parameter\\nconst handleSearchChange = (\\n e: React.ChangeEvent<HTMLInputElement>\\n) => {\\n const newSearch = e.target.value;\\n setSearchParams((searchParams) => {\\n if (newSearch) {\\n searchParams.set(\'search\', newSearch);\\n } else {\\n searchParams.delete(\'search\');\\n }\\n return searchParams;\\n });\\n};\\n// Update region parameter\\nconst handleRegionChange = (\\n e: React.ChangeEvent<HTMLSelectElement>\\n) => {\\n const newRegion = e.target.value;\\n setSearchParams((searchParams) => {\\n if (newRegion) {\\n searchParams.set(\'region\', newRegion);\\n } else {\\n searchParams.delete(\'region\');\\n }\\n return searchParams;\\n });\\n};\\n\\n
setSearchParams
accepts a callback with the current searchParams
. Modifying and returning it updates the URL and triggers navigation automatically.
By default, every call to setSearchParams
adds a new entry to the browser’s history (e.g., when typing in a search box). This can clutter the back button behavior, making navigation confusing.
To prevent this, pass { replace: true }
as the second argument to setSearchParams
. This updates the URL without adding a new history entry, keeping back navigation clean and predictable:
setSearchParams(\\n (searchParams) => {\\n // ...\\n },\\n { replace: true }\\n);\\n\\n
This way, the URL stays in sync with the current filter state, while the browser history remains clean.
\\nTo avoid repeating setSearchParams
logic when managing multiple query parameters, we can encapsulate the update logic in a reusable helper function:
// Helper function for updating multiple params\\nconst updateParams = (\\n updates: Record<string, string | null>,\\n replace = true\\n) => {\\n setSearchParams(\\n (searchParams) => {\\n Object.entries(updates).forEach(([key, value]) => {\\n value !== null\\n ? searchParams.set(key, value)\\n : searchParams.delete(key);\\n });\\n return searchParams;\\n },\\n { replace }\\n );\\n};\\n\\n
This function centralizes setting, updating, and deleting parameters, keeping the code cleaner and easier to maintain. With updateParams
, we can pass an object of key-value pairs where null
values remove parameters from the URL.
With this helper, event handlers become concise:
\\nconst handleSearchChange = (\\n e: React.ChangeEvent<HTMLInputElement>\\n) => {\\n updateParams({ search: e.target.value || null });\\n};\\nconst handleRegionChange = (\\n e: React.ChangeEvent<HTMLSelectElement>\\n) => {\\n updateParams({ region: e.target.value || null });\\n};\\n\\n
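If the same pattern repeats across many pages, it can be factored one step further into a custom hook. The sketch below, a hypothetical useUrlFilter helper that is not part of the demo, wraps useSearchParams so each filter reads like plain state:

```typescript
import { useSearchParams } from "react-router-dom";

// Hypothetical helper: one URL-backed "state" value per query key.
export function useUrlFilter(key: string, defaultValue = "") {
  const [searchParams, setSearchParams] = useSearchParams();
  const value = searchParams.get(key) ?? defaultValue;

  const setValue = (next: string) => {
    setSearchParams(
      (params) => {
        if (next) {
          params.set(key, next);
        } else {
          params.delete(key); // drop empty values to keep the URL clean
        }
        return params;
      },
      { replace: true } // avoid flooding browser history while typing
    );
  };

  return [value, setValue] as const;
}

// Usage inside a component:
// const [search, setSearch] = useUrlFilter("search");
```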
FAQs about useSearchParams
Why isn’t useSearchParams updating when using useNavigate
?
setSearchParams
already handles navigation internally. It updates the URL and triggers a route transition. So, there’s no need to call useNavigate
separately.
Why use useSearchParams
over window.location.search
?
useSearchParams
provides a declarative way to manage query parameters within React. It keeps the UI in sync with the URL without triggering page reloads. In contrast, window.location.search
requires manual parsing, and updating it directly causes a full page reload, breaking the smooth experience expected in a single-page app (SPA).
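For comparison, here is a minimal sketch of the manual approach (browser-only; the parameter names are illustrative):

```typescript
// Reading a query param by hand, outside React Router:
const params = new URLSearchParams(window.location.search);
const region = params.get("region") ?? "";

// Writing to location.search triggers a full page reload,
// discarding all in-memory React state:
window.location.search = `?region=${encodeURIComponent("asia")}`;
```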
Managing filter state with useState
may work at first, but as soon as users reload the page, use the back button, or try to share a specific view, its limitations become clear. That’s where useSearchParams
shines.
By syncing the UI state with the URL, we unlock persistence, shareability, and a smoother navigation experience. As demonstrated in the Country Explorer app, integrating query parameters with React Router is not only achievable but also leads to cleaner, more maintainable code and a more resilient user experience.
\\nWhether you’re building filters, search, or pagination, managing state through the URL ensures your app behaves in a modern, reliable, and intuitive way.
\\nIf you found this guide helpful, consider sharing it with others who want to build better React experiences.
\\nView the full project source code on GitHub.
\\n Monorepos centralize all codebases in one repository, making cross-service changes atomic, dependency management centralized, and tooling consistent. They work best when teams need tight coordination and shared practices. Polyrepos isolate services into separate repositories for independent deployments, varied tech stacks, and simplified ownership boundaries. Polyrepos are ideal for teams operating autonomously.
\\nWhen choosing between monorepos and polyrepos, the right choice depends on whether you’re optimizing for integration or independence.
\\nMonorepos consolidate all services and libraries into a single versioned repository. This structure allows changes across multiple packages or services to be atomic. A developer updating an API schema and its corresponding client libraries can do so in one commit, tracked in one PR, reviewed by a single team. Ownership boundaries are defined through folder structure, codeowners
files, and service-specific tooling — not separate repositories.
Polyrepos, on the other hand, enforce boundaries at the repository level. Each service, library, or application lives in isolation, often with its own CI pipeline, issue tracker, and versioning strategy. This decoupling simplifies mental models for single-purpose teams, particularly when external contributors or third-party service owners are involved. It also makes it easier to deprecate services without bleeding code across unrelated components.
\\nMonorepos standardize dependency versions using centralized tooling. In a JavaScript or TypeScript workspace, tools like pnpm
or turborepo
resolve internal packages in-place, meaning a shared library used by ten services only needs to be built once. Services can consume the latest commit of the internal library, or use tools like Changesets for version tagging and changelog generation within the repo.
In polyrepos, dependency boundaries are enforced through published packages. You must publish a shared library to a package registry, then bump and install the new version in downstream consumers. This decoupling adds friction but increases reliability, as consumers are insulated from upstream changes unless they explicitly opt in.
\\nCI pipelines in monorepos benefit from tools that support change detection. With Nx, Bazel, or GitHub Actions and custom path filters, it’s possible to run tests only for services impacted by a commit. This enables fast iteration across multiple parts of the system. However, without careful cache management and tooling enforcement, pipeline duration can inflate as the repository grows.
\\nPolyrepos use independent pipelines, so a failing test in Service A won’t block the deployment of Service B. Each pipeline is tailored to the service’s requirements, and CI durations remain short even as the number of repositories scales. But cross-repo integration tests become harder to manage, requiring either a synthetic monorepo for integration CI or a dedicated test orchestration system that can clone and build multiple repositories simultaneously.
\\nMonorepos enable coordinated refactors. You can rename a core type, update all references across services, and ship everything together. Linting, formatting, and testing configurations are shared across the codebase, enforced by a unified toolchain. This consistency enforces architectural guidelines and simplifies onboarding.
\\nPolyrepos make refactoring slower. Each change must be tracked across repositories, reviewed separately, and released in stages. Linter configurations may drift, resulting in style mismatches or outdated tooling. On the other hand, polyrepos support more varied tech stacks. A Go backend team and a React frontend team can evolve independently without the friction of shared config or conflicting dependencies.
\\nMonorepos centralize tooling. A new developer pulls the repository, runs a bootstrapping script, and gains access to every service and library. Tooling can be unified across the repo using a single CLI entry point, whether that’s a shell script, a Makefile, or a framework-specific command. This approach reduces the cost of context switching but increases initial setup time and disk usage. A large codebase like that can be overwhelming for beginners or new team members.
\\nPolyrepos are lighter to clone and faster to understand individually. Tooling lives in each repository and can be tightly scoped. However, inconsistencies accumulate. Two services might use different versions of a CLI tool or different deployment strategies, requiring new hires to internalize multiple practices and switch environments frequently.
\\n\\nIf your teams frequently make cross-service changes, rely on shared libraries, or need tight coordination for feature releases, monorepos eliminate friction. If your teams are independent, ship at their own pace, and value decoupling over tight integration, polyrepos are a better fit. Consider whether your organizational bottlenecks stem from code coordination or team autonomy, then choose accordingly.
\\nTeams prioritizing fast iteration and shared ownership benefit from monorepos. Organizations that need strict isolation, independent versioning, or decentralized teams should lean toward polyrepos. Hybrid approaches, such as grouping related services in a monorepo while keeping others separate, offer a good middle ground.
\\nThe decision between monorepos vs. polyrepos hinges on team size, deployment cadence, and dependency coupling. Neither approach is universally superior; each imposes tradeoffs in tooling, workflow, repository size, and maintainability. Evaluate these factors against your project’s requirements to ensure the right fit.
\\nThe table below tells you which repository model to pick based on the aspect you’re optimizing for:
\\nAspect | \\nRecommended approach | \\nWhy | \\n
---|---|---|
Cross-service changes | \\nMonorepo | \\nEnables atomic commits and reviews across multiple services or libs | \\n
Independent deployments | \\nPolyrepo | \\nKeeps services decoupled, allowing isolated deployment pipelines | \\n
Dependency management | \\nMonorepo | \\nAvoids publishing shared packages; internal packages are linked locally | \\n
Tooling flexibility | \\nPolyrepo | \\nEach team can define its own toolchain without conflicts | \\n
Shared code refactors | \\nMonorepo | \\nSingle-commit refactors across the entire codebase are possible | \\n
Onboarding developers | \\nPolyrepo | \\nSmaller scope per repo makes initial setup and context easier | \\n
Consistent code quality | \\nMonorepo | \\nShared linters and formatters can be enforced globally | \\n
Scalable CI/CD | \\nPolyrepo | \\nSeparate pipelines prevent bottlenecks and scale independently | \\n
Integration testing | \\nMonorepo | \\nEasier to run full-stack or cross-service tests in a unified setup | \\n
Team autonomy | \\nPolyrepo | \\nTeams own their services fully without repo-to-repo coordination | \\n
Tech stack diversity | \\nPolyrepo | \\nSupports varied languages and frameworks without tight coupling | \\n
Below is a practical monorepo setup using pnpm
, TurboRepo
, and a shared TypeScript config. All services and libraries live in a single version-controlled repository:
repo/\\n├── apps/\\n│ ├── api/\\n│ │ ├── src/\\n│ │ └── package.json\\n│ ├── admin/\\n│ │ ├── src/\\n│ │ └── package.json\\n│ └── web/\\n│ ├── src/\\n│ └── package.json\\n├── packages/\\n│ ├── ui/\\n│ │ ├── components/\\n│ │ └── package.json\\n│ └── config/\\n│ ├── eslint/\\n│ └── tsconfig/\\n├── .github/workflows/ci.yml\\n├── turbo.json\\n├── pnpm-workspace.yaml\\n└── package.json\\n\\n
In this example, dependency resolution is handled through pnpm workspaces
. Internal packages are linked automatically. The build and test orchestration is done via TurboRepo
, which detects changes and scopes commands, while code ownership is enforced via CODEOWNERS
and scoped directories under apps/
. Finally, shared tooling lives in packages/config
, which can include ESLint, Prettier, or TS configs.
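For reference, a minimal pnpm-workspace.yaml for this layout could look like the sketch below (one reasonable configuration, not the only one). Apps would then depend on internal packages with the workspace protocol, e.g. "@org/ui": "workspace:*" in their package.json, so pnpm links them in place instead of installing from a registry:

```yaml
# A minimal sketch: every folder under apps/ and packages/ becomes a
# workspace package that pnpm resolves locally.
packages:
  - "apps/*"
  - "packages/*"
```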
To build only changed apps, run the following:
\\nturbo run build --filter=... # Turbo handles caching and change detection\\n\\n
And run the following to execute all tests:
\\nturbo run test\\n\\n
Each service is its own Git repository with a separate CI pipeline. Internal libraries are published to a registry (e.g., npm or GitHub Packages).
\\napi-service
api-service/\\n├── src/\\n├── package.json\\n├── .github/workflows/ci.yml\\n└── .env\\n\\n
web-client
web-client/\\n├── src/\\n├── package.json\\n├── .github/workflows/ci.yml\\n└── .env\\n\\n
shared-ui
shared-ui/\\n├── components/\\n├── package.json\\n└── .github/workflows/publish.yml\\n\\n
In this code, shared-ui
is versioned and published to npm or a private registry on each main
merge. api-service
and web-client
install @org/shared-ui@^1.2.0
via npm install
. Each repo runs its own pipeline, and PRs are reviewed and merged independently, while releases are tracked using tags and changelogs in each repository.
The decision between monorepos vs. polyrepos depends on whether you’re optimizing for the scale of services or the speed of autonomy. A monorepo setup pays off when coordination between teams is high and tight integration between services or libraries is critical. Polyrepos work better when services evolve independently, and coordination overhead must be avoided.
\\nHeader image source: IconScout
\\nEnd-to-end (e2e) testing is essential for ensuring software applications function correctly. However, traditional testing tools like Selenium and Cypress can be difficult to use: they have steep learning curves, produce fragile tests, and require a lot of maintenance.
\\nAI-powered testing tools like Shortest, Testim, Mabl, and Functionize directly address these problems. They use natural language processing (NLP) and self-healing tests, making it easier to create and maintain tests, which means you don’t need to be a coding expert to use them.
\\nThis article looks at how AI-powered testing tools compare to traditional ones and their main benefits. We’ll take a close look at Shortest, an open source AI-powered testing library, its features, and how it simplifies the testing process.
\\nTraditional end-to-end testing frameworks are important for automated testing, but they come with several drawbacks: steep learning curves, brittle tests that break when the UI changes, and a heavy ongoing maintenance burden.
\\nAI-driven testing solutions help solve these problems by introducing several key features: natural language test creation, self-healing tests that adapt when the UI changes, and accessibility for non-technical team members.
\\nAI-powered end-to-end (e2e) testing tools offer several benefits compared to traditional frameworks:
\\nAI testing tools help reduce the time needed to create and maintain tests. What once took hours or days can now often be done in minutes. You don’t need to write custom code for every test case. Less time is spent debugging tests that break easily, test maintenance becomes automatic when the application changes, and you get immediate feedback on whether tests are valid while you create them.
\\nStudies show that switching to AI-powered tools can cut test creation time by up to 80%. This gives developers more time to focus on building features instead of maintaining tests.
\\nAI testing tools have self-healing features that lower the maintenance load for teams, especially those dealing with fragile test suites that often break during development. When user interface elements change, these tools can automatically spot the changes, use machine learning to find replacement elements, continue running tests without needing manual fixes, and learn from successful changes to improve future performance.
\\nAI testing tools help team members, both technical and non-technical, work better together. They help product managers check that tests accurately reflect user experiences. QA specialists and business stakeholders can create and maintain tests without coding skills. They also allow developers to concentrate on complex issues rather than basic tests. This teamwork ensures that testing aligns with business needs and that everyone shares responsibility for maintaining quality.
\\nAI tools make it easier to scale complex applications. They help teams create tests faster, run them on different devices in the cloud at the same time, choose the right tests intelligently, and reduce test failures. This leads to more reliable results. With this scalability, teams can keep their tests thorough even as applications grow and change. This ensures a smoother development and testing process.
\\nHere’s a simple overview of four popular tools: Shortest, Testim, Mabl, and Functionize, each offering AI-driven end-to-end testing.
\\nShortest is an open source testing framework that uses NLP to understand test descriptions. This makes it easy for anyone, even those with limited technical skills, to create tests. Built on Playwright, Shortest can automate browser tasks with little coding. Shortest is great for teams looking for quick and easy test creation, though using an external API might slow it down.
\\nKey features of Shortest include:
\\nshortest init command sets up a project quickly, and tests can run in headless or visible modes
\\nTestim by Tricentis is a testing platform that speeds up the creation and maintenance of tests for web and mobile apps. It uses machine learning to make tests stable and less flaky.
\\nTestim is ideal for agile teams needing strong regression testing, but its pricing can be a hurdle for smaller projects. Key features include Smart Locators that automatically update element references, record-and-replay test creation through a low-code visual editor, and integrations with CI/CD tools like Jenkins and Azure DevOps.
\\nMabl is an AI-based test automation platform for web, mobile, and API testing. It focuses on accessibility and collaboration. Mabl is great for teams that want speed and minimal coding, but some of its advanced features may take some time to learn.
\\nKey features of Mabl include low-code, AI-powered test creation with a visual recorder, auto-healing tests that adapt to UI and data changes, and support for functional, performance, accessibility, API, and visual regression testing.
\\nFunctionize is a high-end testing platform that uses machine learning and computer vision for functional, performance, and visual testing. It features self-healing tests and scalability. Functionize is ideal for large projects that change often, but its costs and Windows-only design might make it less accessible for smaller teams.
\\nKey features of Functionize include ML-driven self-healing tests, visual AI based on computer vision, and enterprise-grade parallel testing across an extensive range of browsers and devices.
\\nFeature | \\nShortest | \\nTestim | \\nMabl | \\nFunctionize | \\n
---|---|---|---|---|
Core technology | \\nAI-powered (Anthropic Claude API), built on Playwright | \\nMachine Learning (Smart Locators), Cloud-based | \\nAI-native, low-code, uses ML and computer vision | \\nAI and ML with NLP and computer vision, cloud-based | \\n
Test creation | \\nNatural language descriptions (e.g., “Login with email”) | \\nRecord-and-replay, low-code visual editor, supports coded enhancements | \\nLow-code, AI-powered action words, visual recorder | \\nNLP for scriptless tests, visual test editor | \\n
Ease of use | \\nHigh: Plain English tests, minimal setup with \\nshortest init | \\nHigh: Codeless for non-technical users, intuitive UI | \\nHigh: Codeless focus, accessible for beginners | \\nModerate: Scriptless but may require learning for advanced features | \\n
Self-healing tests | \\nLimited: Relies on AI to adapt to minor changes, no explicit self-healing | \\nYes: Smart Locators auto-update element references | \\nYes: Auto-heals tests for UI/data changes | \\nYes: Strong self-healing with ML-driven updates | \\n
Supported test types | \\nFunctional, API, UI, GitHub 2FA authentication | \\nFunctional, UI, mobile (web/native), visual testing | \\nFunctional, performance, accessibility, API, visual regression | \\nFunctional, performance, load, visual, API | \\n
Integration | \\nGitHub, Mailosaur, basic CI/CD support | \\nCI/CD (Jenkins, Azure DevOps), Jira, Slack, Tricentis Device Cloud | \\nCI/CD (GitHub, Azure, Bitbucket), Postman, Slack | \\nCI/CD (Jenkins, GitLab), third-party apps via API Explorer | \\n
Cross-browser/Device support | \\nYes: Playwright-based, supports multiple browsers | \\nYes: Real browsers, iOS/Android native apps | \\nYes: Web, mobile, cross-browser/devices | \\nYes: Extensive browser/device coverage, parallel testing | \\n
Pricing model | \\nOpen source and depends on Anthropic API usage | \\nFree tier, Essentials/Pro plans, custom pricing | \\nPay-as-you-go, subscription plans, custom pricing | \\nCustom pricing, potentially high for small teams | \\n
Learning curve | \\nLow: Natural language reduces technical barriers | \\nLow: Codeless options, moderate for coded enhancements | \\nLow: Intuitive GUI, low-code approach | \\nModerate: Advanced features require familiarity | \\n
Scalability | \\nModerate: Suitable for small to medium projects, API dependency | \\nHigh: Scales for agile teams, parallel testing | \\nHigh: Cloud-based, scales for continuous testing | \\nHigh: Enterprise-grade, supports large-scale parallel testing | \\n
Unique strength | \\nNatural language simplicity, GitHub 2FA support | \\nSmart Locators for flaky test reduction, mobile native app support | \\nAI-driven test generation, performance insights | \\nVisual AI, comprehensive test coverage for complex apps | \\n
Best for | \\nTeams wanting simple, scriptless E2E testing with minimal coding | \\nAgile teams needing fast test creation and maintenance | \\nDevOps teams prioritizing codeless, continuous testing | \\nEnterprises with complex apps needing robust, scalable testing | \\n
Limitations | \\nExternal API reliance, limited performance/accessibility testing | \\nLess focus on performance, pricing complexity | \\nLimited customization for advanced users, higher cost | \\nHigh cost, Windows-centric design, less flexible for small teams | \\n
In this section, we’ll look at how to test a demo application using Shortest. We’ll cover setup, writing a natural language test, and demonstrate advanced features like test chaining and API testing. Our demo app will be a simple React-based to-do list application using Next.js, which allows users to add, view, and delete tasks. The application will have a frontend UI and a basic API endpoint to fetch tasks.
\\nTo follow along, you can clone the GitHub repo. cd
into the project directory and run npm install && npm run dev
. This app creates a simple UI where users can add and delete tasks, stored in the component’s state, and an API that returns a static list of tasks, simulating a backend response.
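For context, the task endpoint could look something like the sketch below, assuming the Next.js App Router; the file path and task data are illustrative, and the demo repo’s actual implementation may differ:

```typescript
// Hypothetical app/api/tasks/route.ts (Next.js App Router)
import { NextResponse } from "next/server";

export async function GET() {
  // Static data simulating a backend response
  return NextResponse.json({
    tasks: [
      { id: 1, text: "Sample Task 1" },
      { id: 2, text: "Sample Task 2" },
    ],
  });
}
```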
To install Shortest, the command below will help you set up the process in a new or existing project:
\\nnpx @antiwork/shortest init\\n\\n
This command will:
\\nInstall @antiwork/shortest as a dev dependency
\\nCreate a shortest.config.ts file
\\nCreate a .env.local file with placeholders
\\nUpdate .gitignore to include .env.local and .shortest/
Now edit shortest.config.ts
to match the application setup:
import type { ShortestConfig } from \\"@antiwork/shortest\\";\\nexport default {\\n headless: false,\\n baseUrl: \\"http://localhost:3000\\",\\n browser: {\\n contextOptions: {\\n ignoreHTTPSErrors: true\\n },\\n },\\n testPattern: \\"**/*.test.ts\\",\\n ai: {\\n provider: \\"anthropic\\",\\n apiKey: process.env.ANTHROPIC_API_KEY\\n },\\n} satisfies ShortestConfig;\\n\\n
Edit .env.local
and add your Anthropic API key (you’ll need to sign up for one here). You can also configure browser behavior using the browser.contextOptions
property in your config file. This will allow you to pass custom Playwright browser context options.
Ensure .env.local
is in .gitignore
to avoid committing sensitive data.
In this section, we’ll explore how to write and execute tests using Shortest. We’ll write a test to verify adding a task to the to-do list.
\\n\\nCreate a test file using the specified pattern in the config file app/todo.test.ts
:
import { shortest } from \'@antiwork/shortest\';\\n\\nshortest(\'Add a new task to the to-do list\', {\\n task: \'Buy groceries\',\\n});\\n\\n
This test instructs Shortest to add a task with the text “Buy groceries” to the list. Now run the test using this command:
\\nnpx shortest app/todo.test.ts \\n\\n
Here’s what happens: Shortest interprets the plain-English test through the Claude API, drives the browser with Playwright, and saves run artifacts in .shortest/ for verification.
The test passes if the task appears in the list. You’ll see the browser perform the actions live, and the console will report success:
\\nFound 1 test file(s)\\n❯ app/todo.test.ts (1)\\n ● Add a new task to the to-do list\\n ✓ passed\\n ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ \\n\\n Tests 1 passed (1)\\n Duration 10.84s\\n Started at 3:47:47 PM\\n Tokens 0 tokens (≈ $0.00)\\n\\n ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯\\n\\n
Let’s demonstrate test chaining and API testing to showcase Shortest’s advanced capabilities. We’ll chain tests to add a task and then delete it. Edit the app/todo.test.ts
file and execute the test:
import { shortest } from \'@antiwork/shortest\';\\n\\nshortest([\\n \'Add a new task to the to-do list with text Buy groceries\',\\n \'Delete the task with text Buy groceries from the to-do list\',\\n]);\\n\\n
Shortest will run the chained steps in order, first adding the task and then deleting it, verifying the outcome of each step before moving on.
\\nNow, let’s test the /api/tasks
endpoint to ensure it returns the expected tasks. Add the code below to the app/todo.test.ts
file and execute the test:
import { shortest } from \'@antiwork/shortest\';\\n\\nconst API_BASE_URI = \'http://localhost:3000/api\';\\n\\n// UI Test Chain\\nshortest([\\n \'Add a new task to the to-do list with text Buy groceries\',\\n \'Delete the task with text Buy groceries from the to-do list\',\\n]);\\n// API Test\\nshortest(`\\n Test the API GET endpoint ${API_BASE_URI}/tasks\\n Expect the response to contain a list of tasks including Sample Task 1\\n`);\\n\\n
Here’s what happens: the UI chain runs first, then Shortest sends a request to the /api/tasks endpoint and checks that the response matches the expectation written in the test description.
The API test passes if the response contains the expected task. Shortest then logs the API response details, and the test suite completes successfully:
\\n● Test the API GET endpoint http://localhost:3000/api/tasks Expect the response to contain a list of tasks including Sample Task 1\\n ✓ passed\\n ↳ 6,414 tokens (≈ $0.02)\\n ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ \\n\\n Tests 1 passed (1)\\n Duration 19.65s\\n Started at 4:32:52 PM\\n Tokens 6,414 tokens (≈ $0.02)\\n\\n ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯\\n\\n
Shortest allows you to use callback functions for custom checks and actions after your browser tests run. This feature lets you create more complex test scenarios, like checking your database or making API calls, to see how your application is doing after user interactions.
\\nTo demonstrate callbacks, let’s add a test with a custom assertion to verify the task count after adding a task. Add this to the app/todo.test.ts
file and run the test:
shortest(\'Add a task and verify task count\', {\\n task: \'Learn TypeScript\',\\n}).after(async ({ page }) => {\\n const taskCount = await page.locator(\'li\').count();\\n if (taskCount < 1) {\\n throw new Error(\'No tasks found in the list\');\\n }\\n});\\n\\n
Here, the .after callback uses Playwright’s API to count the <li> elements (tasks) once the browser steps finish, failing if none are found. This confirms the task was added, enhancing reliability with custom logic:
● Add a task and verify task count\\n ✓ passed\\n ↳ 40,100 tokens (≈ $0.13)\\n ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ \\n\\n Tests 1 passed (1)\\n Duration 55.93s\\n Started at 4:35:45 PM\\n Tokens 40,100 tokens (≈ $0.13)\\n \\n ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯\\n
Lifecycle hooks let you run code before and after tests. This helps with tasks like setting up the task list, navigating to the app, cleaning the UI state, and more. In your app/todo.test.ts
file, add the code below and run the test:
shortest.beforeAll(async ({ page }) => {\\n await page.goto(\'http://localhost:3000\');\\n // Clear any existing tasks by deleting all visible tasks\\n while (await page.locator(\'button:text(\\"Delete\\")\').count() > 0) {\\n await page.locator(\'button:text(\\"Delete\\")\').first().click();\\n }\\n});\\n\\nshortest.beforeEach(async ({ page }) => {\\n await page.reload();\\n});\\n\\nshortest.afterEach(async ({ page }) => {\\n // Clear the input field to prevent carryover\\n await page.locator(\'input[placeholder=\\"Enter a new task\\"]\').fill(\'\');\\n});\\n\\nshortest.afterAll(async ({ page }) => {\\n await page.close();\\n});\\n\\n
Here are the lifecycle hooks Shortest provides:
\\nbeforeAll
: Executes once before all tests. Ideal for initial setup, such as navigating to the app and clearing any pre-existing tasks by clicking all “Delete” buttonsbeforeEach
: Executes before each test. Useful for resetting the UI state, like reloading the page to clear tasks stored in the component’s stateafterEach
: Executes after each test. Handy for cleanup, such as clearing the input field to ensure no text persists between testsafterAll
: Executes once after all tests. Suitable for final cleanup, like closing the browser to free system resourcesThe hooks ensure a consistent and isolated testing environment. In the code above, each test starts with an empty task list, the input field is cleared post-test, and the browser is closed at the end, preventing state leakage and ensuring reliable test execution.
\\nShortest has many advantages over traditional frameworks like Selenium and Cypress.
\\nTraditional testing tools require long and complicated code for browser tasks, and they don’t have built-in AI support, making them slow and prone to errors. For example, Cypress, while modern, uses a lot of JavaScript. Even though it has started to implement some AI features like automatic test creation for missing UI elements, it is not primarily AI-driven.
\\nShortest’s AI features offer a different approach, allowing testers to write shorter, human-friendly tests and reducing the time and technical skills required to set up tests. For example, a login test in Selenium can take dozens of lines of code to navigate the website and manage waits, while Shortest can achieve this with just one simple sentence. Similarly, Cypress simplifies some tasks, but still needs specific commands like cy.get()
and cy.click()
to do so.
Shortest uses Playwright to provide performance similar to Cypress, but it also integrates with the Claude API to handle complex tasks automatically, such as managing dynamic forms or validating API responses. These are tasks for which traditional frameworks require manual coding.
\\nIt is important to note, however, that Shortest relies on Anthropic’s API, which means it depends on an external service. This is different from Selenium and Cypress, which are self-contained. Another thing to consider is that Shortest’s natural language method might feel less precise for developers who want detailed control over their tests.
\\n\\nHowever, for most end-to-end testing scenarios, Shortest’s ease of use and AI features make it a strong option, especially for teams that value speed and accessibility over deep customization.
\\nAI-driven testing tools like Shortest, Testim, Mabl, and Functionize are changing how we do end-to-end testing. These tools use automation to help teams spend less time on maintenance and allow non-coders to take part in testing, resulting in higher quality software. While traditional tools like Selenium and Cypress are still effective, AI-powered tools offer a strong option for teams that want to improve their testing processes.
\\nAs AI technology advances, we will likely see even more improvements that simplify testing and strengthen software reliability.
\\nThe SOLID principles are a foundational group of guidelines for software design. They’re a bit like an apartment building: each floor is designed to support the next, providing stability, adaptability, and longevity. The term SOLID is an acronym for the first letter of each of the five principles:
\\nS — Single Responsibility Principle
\\nO — Open-Closed Principle
\\nL — Liskov Substitution Principle
\\nI — Interface Segregation Principle
\\nD — Dependency Inversion Principle
As important as the SOLID principles are to software system design, these principles aren’t immune to criticism. The Open-Closed Principle (or OCP) is no exception to that debate.
\\nThe OCP isn’t just a theory. Reddit threads buzz with discussions about it, and Robert C. Martin (a.k.a. Uncle Bob, who popularized the SOLID principles) has passionately defended it in his blog posts. Many swear by it, while others warn against overuse.
\\nThe OCP dictates that software entities such as modules, classes, and functions should be open to extension but closed to modification. Invariably, this allows a module’s behavior to be extended while its source code remains unaltered.
\\nToday, we’ll explore the Open-Closed Principle: the criticisms around it, its best use cases, and its common misapplications.
\\nLet’s begin by answering the question: What does “open for extension, closed for modification” in OCP mean?
\\nImagine you have a box. That box is your application’s core. Over time, you want to add more compartments without opening or altering the box itself. So, what do you do instead? You just attach new sections. That’s extension. In code, you achieve this by adding new modules or classes that interact with the existing system without modifying it.
\\nModification, on the other hand, means tearing open the box and rearranging everything. That is dangerous. The original design might break, and new bugs can be introduced. OCP warns against this. It suggests that we structure our code so that new behavior can be plugged in without risking the old.
\\nThe phrase “open for extension” is quite literal. It describes a situation where a module’s behavior can be augmented or enhanced. This is achieved simply by adding new code, such as new subclasses or new implementations of an interface. Doing this allows the system to accommodate new features without any alteration to the existing code.
\\nOnce there is no alteration to the existing source code, it means that the module is closed to modification. Simply put, after a function has been tested and used, it should not be modified to include new functions.
\\nAt its core, OCP simply says, “Add new features by extending the code, not by changing what’s already there.”
\\nThough OCP stands as a foundation for software design, it has sparked heated debates within the developer community. Critics argue that adhering strictly to the OCP can result in convoluted code structures, especially in cases of overuse or misapplication of the principle.
\\nSome developers feel that the conventional application of the OCP (and other SOLID principles) heavily implies inheritance. This, they believe, can lead to tiny classes and one-line functions, which can result in a complicated codebase.
\\nUncle Bob has emphasized the importance of the OCP in the sustainability of clean software architecture. In his blog, he explained that designs that follow the OCP and the Dependency Inversion Principle (DIP) make isolation and separation easier, creating systems that are easier to maintain.
\\nHowever, Uncle Bob also acknowledged that balancing these principles, especially when you have to integrate practices like Test-Driven Development (TDD), can be challenging.
\\n\\nWhen used right, OCP makes systems scalable and maintainable. When misapplied, it leads to endless abstraction layers that complicate code rather than simplifying it.
\\nThe idea behind the Open-Closed Principle didn’t actually originate from Robert C. Martin. It’s been around for over two decades, and has had two major interpretations over that time:
\\nBertrand Meyer is credited with originating the Open-Closed Principle in his 1988 book, “Object-Oriented Software Construction.” In it, Meyer proposed that a module is open if it is available for extension, meaning new functions and fields can be added to it. Such a module is closed if it is available for use by other modules, indicating that it has a well-defined and stable interface.
\\nIn the 1990s, the open-closed principle was reinterpreted. This new interpretation emphasized the use of abstracted interfaces, leaving room for multiple implementations to be created and substituted for each other polymorphically. This became known as the Polymorphic Open-Closed Principle.
\\nContrary to Meyer’s definition, the Polymorphic interpretation supports inheritance from abstract base classes.
\\nToday, the Open/Closed Principle (OCP) is widely understood to mean that software entities should be open for extension but closed for modification. In practice, this means you should add new functionality by extending existing code rather than changing it, helping preserve the original logic and reducing the risk of introducing bugs.
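As a quick illustration, here is a minimal TypeScript sketch with hypothetical names, contrasting a design that forces modification with one that allows extension:

```typescript
// Violates OCP: every new shape forces an edit to this function.
type Shape = { kind: "circle"; r: number } | { kind: "square"; side: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.r ** 2;
    case "square":
      return shape.side ** 2;
  }
}

// Follows OCP: new shapes implement the interface; totalArea never changes.
interface HasArea {
  area(): number;
}

class Circle implements HasArea {
  constructor(private r: number) {}
  area(): number {
    return Math.PI * this.r ** 2;
  }
}

class Square implements HasArea {
  constructor(private side: number) {}
  area(): number {
    return this.side ** 2;
  }
}

function totalArea(shapes: HasArea[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}
```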
\\nIt’s important to note that changes in one part of the code can have ripple effects on other components. This can result in a rise in errors and unexpected issues. Also, a little tweak of the code can unravel hidden connections, compromising the system.
\\nWhile the OCP reduces the risk of errors creeping in, extending modules may not always be the best resort. Each new feature added via extension can require extra interfaces, classes, or inheritance chains. This can bloat the code, and the extra layers of abstraction can increase memory utilization and slow down execution. For performance-sensitive applications, this can become a real concern.
\\n\\nThe application of the OCP has yielded many positive outcomes: clean software design, easily editable modules, and safer modification. However, there are a few instances where it can be considered harmful. So, when do you adhere to the OCP, and when should you stray away from it? Here are a couple of use cases to keep in mind.
\\nHere are a few of the ways the OCP has been of tremendous help.
\\nLarge-scale systems are one of the biggest beneficiaries of the open-closed principle. When systems adhere to the principle of OCP, they have the luxury of scalability with a side of ease. When modules are designed in such a way that they can be extended without being modified, new features and components can easily be included without compromising the existing functionality. This allows for smoother integration of new features and parallel development.
\\nPlugin-based architectures see huge benefits from the OCP, too. In these systems, the core application exposes extension points that plugins hook into to add or modify functionality. This design gives third-party developers a solid basis for enhancing applications while the core of the codebase stays intact. In this case, OCP promotes not only flexibility but also customization.
\\nFor example, Integrated Development Environments (IDEs) like Visual Studio Code apply the OCP while using the plugin architectures. Developers can extend their modules to add features like debugging tools or language support. However, the main IDE remains unchanged. This way, the IDE is adaptable to solve the needs of varying developers without compromising its stability.
\\nAPIs can evolve when we apply the OCP. This way, new parameters and endpoints can be added without disrupting the functionality of existing clients. Designing APIs that follow the open-closed principle is crucial to maintaining backward compatibility while allowing for the introduction of new features.
\\nFor example, a web service can separate its endpoints into versions. The new versions introduce enhanced features while the existing version remains operational. With this in play, APIs can evolve to try new possibilities without their consumers feeling like immediate changes are being forced onto them, preserving stability and trust.
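As a minimal sketch of this idea (assuming an Express app; the routes and payloads are illustrative):

```typescript
import express from "express";

const app = express();

// v1 stays untouched for existing clients
app.get("/api/v1/flights", (_req, res) => {
  res.json({ flights: [] });
});

// v2 is added alongside it, extending the API without modifying v1
app.get("/api/v2/flights", (_req, res) => {
  res.json({ flights: [], total: 0, page: 1 }); // new fields introduced in v2
});

app.listen(3000);
```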
\\nWe’ve celebrated the different ways OCP helps. Now let’s evaluate some possible misapplications of the OCP.
\\nOver-engineering happens when developers add too many abstractions to a system, most of which do not solve an existing need. The result is a complex codebase and system that’s difficult to understand and maintain.
\\nFor example, creating overly generic components in React codebases in anticipation of cases that might need these abstractions in the future can strain the ease of maintenance and understanding as the codebase becomes convoluted.
\\nInterface explosion occurs when extra interfaces arise in a codebase due to the overzealous application of the open-closed principle. An example of this is highlighted in the .NET ecosystem, especially with the C# language. Developers sometimes design interfaces for every class to facilitate dependency injection and testing. As valuable as these interfaces are for defining contracts and promoting loose coupling, excessive use can clutter the codebase.
\\nThere are some ideas surrounding the open-closed principle and its applications that are inaccurate. This has further sponsored the misapplication of the Open-Closed Principle. So, let’s highlight these misconceptions and debunk them.
\\nThe open-to-extension and closed-to-modification idea of the OCP is often made to appear as a rule of thumb where, once code is written, there is zero tolerance for modification.
\\nThis is not an accurate reading of the OCP. Yes, the principle provides a premise for extending features, which demands a codebase flexible enough to evolve and accept new functionality. And yes, this should be designed so that the existing codebase stays unaltered. However, that doesn’t mean modifying code is forbidden. The idea is to minimize changes, not to put an eternal and immovable stop to them.
\\nThe Single Responsibility Principle (SRP) emphasizes that a class should have only one reason for change. This means that each class should be saddled with a single responsibility. This is incredibly relevant to the application of the open-closed principle.
\\nWhen focusing on extending code without modifying it, it’s important to apply the Single Responsibility Principle (SRP) alongside the Open/Closed Principle (OCP). Without SRP, you risk piling too much functionality into one class as you extend it, leading to messy, hard-to-maintain code. SRP helps by breaking your code into smaller, more focused classes — each with a single responsibility. Especially in large-scale systems, combining OCP with SRP ensures that your code stays modular, clean, and easier to scale.
\\nThe Dependency Inversion Principle (DIP) says that high-level and low-level modules shouldn’t depend directly on each other — they should depend on abstractions instead. In other words, the implementation details should be driven by abstract interfaces, not the other way around.
\\nWhen paired with the Open/Closed Principle (OCP), DIP helps define how dependencies flow in your code. By using abstractions, you can cleanly separate responsibilities between different layers of your system, making it more modular and easier to maintain.
\\nThat said, it’s possible to apply OCP without fully following DIP. For example, a module might be designed to allow extensions (satisfying OCP), but still depend on concrete implementations rather than abstractions — a violation of DIP. So while applying DIP often leads to satisfying OCP, the reverse isn’t always true.
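Here is a minimal TypeScript sketch, with hypothetical names, of a module that satisfies OCP while still violating DIP:

```typescript
// The exporter hierarchy is open for extension:
class CsvExporter {
  protected separator = ",";
  export(rows: string[][]): string {
    return rows.map((r) => r.join(this.separator)).join("\n");
  }
}

class TsvExporter extends CsvExporter {
  protected separator = "\t"; // extension without modifying CsvExporter
}

// ...but ReportService depends on a concrete class rather than an
// abstraction, so DIP is violated even though OCP is satisfied:
class ReportService {
  private exporter = new CsvExporter(); // hard-wired concrete dependency

  run(rows: string[][]): string {
    return this.exporter.export(rows);
  }
}
```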
\\nWe’ve learned all about the OCP from its benefits to its misapplication. Now let’s put that knowledge to use and take a look at how we apply the OCP in different languages.
\\nIn Python, Abstract Base Classes (ABCs) make it possible for you to define common or generic bases for a group of related objects. With the use of these ABCs, you can make sure that new classes adhere to a specific interface. This way, extensions can be made without any form of modification to the existing code.
\\nHere’s a simple example:
\\nfrom abc import ABC, abstractmethod\\n\\nclass Notification(ABC):\\n @abstractmethod\\n def send(self, message: str) -> None:\\n pass\\n\\nclass EmailNotification(Notification):\\n def send(self, message: str) -> None:\\n print(f\\"Sending email: {message}\\")\\n\\nclass SMSNotification(Notification):\\n def send(self, message: str) -> None:\\n print(f\\"Sending SMS: {message}\\")\\n\\ndef notify_user(notification: Notification, message: str):\\n notification.send(message)\\n\\nif __name__ == \\"__main__\\":\\n email = EmailNotification()\\n sms = SMSNotification()\\n\\n notify_user(email, \\"Hello via Email!\\")\\n notify_user(sms, \\"Hello via SMS!\\")\\n\\n
In this Python code, we define an abstract base class called Notification
. Two concrete classes, EmailNotification
and SMSNotification
, implement this interface. Notice that the notify_user
function works with the abstract Notification
type. We can extend this system by adding more types of notifications without touching the existing function. This is OCP in action.
Java is known for its robust type system. Strategy pattern combined with interfaces allows you to define a family of algorithms, encapsulate each one, and make them interchangeable. This complies with the OCP as you can introduce new features without altering the existing codebase.
\\nLet’s see a Java example:
\\n// Define an interface for notifications\\npublic interface Notification {\\n void send(String message);\\n}\\n\\n// Implement email notification\\npublic class EmailNotification implements Notification {\\n @Override\\n public void send(String message) {\\n System.out.println(\\"Sending email: \\" + message);\\n }\\n}\\n\\npublic class SMSNotification implements Notification {\\n @Override\\n public void send(String message) {\\n System.out.println(\\"Sending SMS: \\" + message);\\n }\\n}\\n\\npublic class NotificationService {\\n public void notifyUser(Notification notification, String message) {\\n notification.send(message);\\n }\\n\\n public static void main(String[] args) {\\n NotificationService service = new NotificationService();\\n Notification email = new EmailNotification();\\n Notification sms = new SMSNotification();\\n\\n service.notifyUser(email, \\"Hello via Email!\\");\\n service.notifyUser(sms, \\"Hello via SMS!\\");\\n }\\n}\\n\\n
This Java example mirrors the Python example. It defines a Notification
interface and concrete implementations. The NotificationService
class uses the interface to send messages. With this setup, if you want to add a new type of notification, you simply create a new class that implements Notification
. No existing code needs to change. The system remains robust and flexible.
TypeScript adds types to JavaScript. You can leverage higher-order components (HOCs) to extend features in TypeScript, especially within React applications. With this format, you can add functionality without modifying existing code. This adheres to the OCP by keeping the base component untouched while new functionality is added.
\\nHere’s a TypeScript example:
\\nimport React from \'react\';\\n\\ninterface ButtonProps {\\n label: string;\\n onClick: () => void;\\n}\\n\\nclass Button extends React.Component<ButtonProps> {\\n render() {\\n return (\\n <button onClick={this.props.onClick}>\\n {this.props.label}\\n </button>\\n );\\n }\\n}\\n\\ninterface IconButtonProps extends ButtonProps {\\n icon: string;\\n}\\n\\nclass IconButton extends Button {\\n props: IconButtonProps;\\n\\n render() {\\n return (\\n <button onClick={this.props.onClick}>\\n <i className={`icon-${this.props.icon}`}></i>\\n {this.props.label}\\n </button>\\n );\\n }\\n}\\n\\nconst App = () => {\\n const handleClick = () => alert(\\"Button clicked!\\");\\n\\n return (\\n <div>\\n <Button label=\\"Click Me\\" onClick={handleClick} />\\n <IconButton label=\\"Icon Click\\" icon=\\"star\\" onClick={handleClick} />\\n </div>\\n );\\n};\\n\\nexport default App;\\n\\n
In this example, the Button
component provides basic functionality. The IconButton
extends it, adding an icon. Note how the original Button
remains untouched. New behavior is added through extension, keeping with the OCP guidelines.
In C#, you can inject dependencies at runtime, enabling extensions without modification, and highlighting the application of the open-closed principle. C# embraces OCP by using interfaces and dependency injection (DI). DI frameworks help decouple components.
\\nLet’s review a C# example:
\\nusing System;\\n\\npublic interface INotification\\n{\\n void Send(string message);\\n}\\n\\npublic class EmailNotification : INotification\\n{\\n public void Send(string message)\\n {\\n Console.WriteLine(\\"Sending email: \\" + message);\\n }\\n}\\n\\npublic class SMSNotification : INotification\\n{\\n public void Send(string message)\\n {\\n Console.WriteLine(\\"Sending SMS: \\" + message);\\n }\\n}\\n\\npublic class NotificationService\\n{\\n private readonly INotification _notification;\\n\\n public NotificationService(INotification notification)\\n {\\n _notification = notification;\\n }\\n\\n public void NotifyUser(string message)\\n {\\n _notification.Send(message);\\n }\\n}\\n\\npublic class Program\\n{\\n public static void Main()\\n {\\n INotification email = new EmailNotification();\\n INotification sms = new SMSNotification();\\n\\n NotificationService emailService = new NotificationService(email);\\n NotificationService smsService = new NotificationService(sms);\\n\\n emailService.NotifyUser(\\"Hello via Email!\\");\\n smsService.NotifyUser(\\"Hello via SMS!\\");\\n }\\n}\\n\\n
This C# snippet demonstrates how dependency injection works. The NotificationService
takes an INotification
in its constructor. This means you can pass in any implementation of the interface. The code remains untouched when you add new notification methods. This pattern is widely used in enterprise environments.
Applying the open-closed principle is not about avoiding modification at all costs; it is about being strategic with extension. The overall aim is to introduce change without destabilizing the system, which is only possible when the OCP is applied judiciously. Here are practices that will help you apply the open-closed principle well:
\\nIt can be tempting to apply OCP everywhere, but speculative design can be a trap. An elaborate design full of abstractions and extension points that over-provisions for future needs leads to clutter.
\\nInstead of doing that, streamline your focus to the real needs of the business rather than blindly following a principle. If you're considering using the OCP in your code, ask yourself whether the extension point serves a need that actually exists today before you build it.
\\nYou enable extension with ease when you inject dependencies rather than hard-coding them. This way, you can easily swap implementations without modifying existing code. It also supports testability, as mock dependencies are easy to inject.
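As a brief sketch of what dependency injection looks like in TypeScript, reusing the notification example from earlier (the mock object here is purely illustrative):

interface Notification {
  send(message: string): void;
}

class EmailNotification implements Notification {
  send(message: string) {
    console.log(`Sending email: ${message}`);
  }
}

class NotificationService {
  // The dependency is injected through the constructor, not hard-coded
  constructor(private readonly notification: Notification) {}

  notifyUser(message: string) {
    this.notification.send(message);
  }
}

// Production wiring
new NotificationService(new EmailNotification()).notifyUser('Hello!');

// Test wiring: a mock implementation slots in without touching NotificationService
const sent: string[] = [];
const mock: Notification = { send: (m) => sent.push(m) };
new NotificationService(mock).notifyUser('Hello!');
console.log(sent); // ['Hello!']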
\\nSimilarly, your codebase is more likely to stay simple when you pair the OCP with the Interface Segregation Principle (ISP). ISP encourages small, focused interfaces rather than large all-in-one interfaces that push multiple unrelated responsibilities onto a class. As a result, abstractions stay minimal.
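Here is a minimal sketch of that interplay; the device interfaces below are illustrative and not part of the article's running example:

// Small, focused interfaces instead of one large IDevice interface
interface Printable {
  print(): void;
}

interface Scannable {
  scan(): void;
}

// A simple printer implements only the capability it actually has...
class BasicPrinter implements Printable {
  print() {
    console.log('Printing...');
  }
}

// ...while a multifunction device opts into both capabilities
class MultiFunctionDevice implements Printable, Scannable {
  print() {
    console.log('Printing...');
  }
  scan() {
    console.log('Scanning...');
  }
}

New capabilities arrive as new interfaces and new implementing classes, so existing classes stay closed to modification while the system stays open to extension.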
\\nNot every extension needs a new class or interface. Sometimes, a straightforward refactor is the smarter, more readable solution.
\\nThe Open/Closed Principle is a cornerstone of writing flexible, maintainable code. By encouraging extension without modification, the OCP helps developers build systems that evolve safely over time. But like any principle, OCP isn’t a silver bullet. Overusing it can lead to overly complex or bloated designs.
\\nThe key is balance. Techniques like dependency injection, designing around real business needs (not just abstract principles), applying interface segregation, and refactoring with purpose all help keep OCP grounded in practical value.
\\nUltimately, think of OCP as a tool, not a rigid rule. Its goal isn’t to complicate your codebase, but to make it more adaptable and easier to scale. And sometimes, the smartest move is to favor simplicity. Use OCP where it makes sense, and let maintainability guide your decisions.
\\n Expo SDK 53 has been released, and it brings with it a wealth of new and improved updates for the React Native ecosystem. It is a significant leap forward for developers building cross-platform mobile applications. The new update goes beyond an incremental release, as SDK 53 brings many new changes combined with powerful features to make this one of the most worthwhile updates yet. There is support for React 19 as well as React Native 0.79, giving developers the ability to use many of the latest features and improvements from the main frameworks.
\\nHere are some of the biggest highlights:

- The stable release of the expo-audio module and the alpha release of expo-maps
- The new expo-background-task module and enhancements to Expo Router v5

These new updates bring improvements to the tooling, with much faster Android build times as well as more streamlined TestFlight distribution when using EAS Build.
\\nUpgrading software can sometimes be a challenging task. However, there are many benefits that come with SDK 53, including better app performance and an enhanced developer experience due to the more modern tooling and APIs.
\\nIt’s clear these new capabilities will not only lead to a better experience for your users but also ensure that your projects remain at the cutting edge within the React Native ecosystem.
\\nIn this article, we’ll go through a practical step-by-step checklist designed to guide you through the process of upgrading to the latest SDK. Regardless of whether you are upgrading an existing application or starting with a fresh project, this roadmap will provide you with a strategy to confidently navigate the changes and use all of the new features effectively.
\\nSo let’s get started as we go through this checklist of 14 steps to mastering Expo SDK 53!
\\nThis table showcases some of the most significant differences and advancements in Expo SDK 53 compared to the previous versions:
\\nFeature Area | \\nPrevious SDKs ( SDK 52 and older) | \\nExpo SDK 53 | \\nKey Benefits | \\n
---|---|---|---|
React Version | \\nUsually React 18.x | \\nReact 19 (with React Native 0.79) | \\nAccess to new React features like <Suspense> for data fetching, use() Hook, Actions | \\n
Audio Handling | \\nexpo-av (Audio component) | \\nexpo-audio (Stable Release) — Recommended for new implementations | \\nMore reliable, performant, easier-to-use, and robust API for audio playback and recording | \\n
Mapping Solutions | \\nReliance on third-party libraries (react-native-maps) or limited Expo libraries | \\nexpo-maps (Alpha Release) — Wraps native Google Maps (Android) & Apple Maps (iOS 17+) | \\nA modern, Expo-maintained solution for common map use cases, built with Jetpack Compose & SwiftUI | \\n
Background Tasks | \\nexpo-background-fetch for simple periodic tasks | \\nexpo-background-task — More robust and flexible API for managing background operations | \\nBetter support for complex background tasks like data syncing, downloads, or routine maintenance | \\n
Android Edge-to-Edge | \\nManual configuration is usually required; it is not the default | \\nStreamlined Edge-to-Edge Display — Default in new projects and Expo Go; opt-in for existing projects | \\nEasier implementation of modern, immersive Android UIs that draw under system bars | \\n
Bundle Analysis | \\nExpo Atlas was experimental; reliance on other tools like webpack-bundle-analyzer for web | \\nExpo Atlas (Stable Release) — EXPO_ATLAS=1 npx expo start | \\nIntegrated, stable tool for visualizing JS bundle composition, aiding in app size optimization | \\n
Dependency Resolution (Metro) | \\nLess strict enforcement of package.json “exports” | \\nStricter package.json “exports” enforcement by default — via Metro in RN 0.79+ | \\nAligns with modern Node.js/npm standards, and handles library compatibility issues much better | \\n
TestFlight Distribution (EAS) | \\nStandard EAS Submit process | \\nSimplified development build to TestFlight — New workflow using distribution: “store” and npx testflight | \\nEasier and faster distribution of development builds to iOS testers via TestFlight | \\n
Local Android Builds | \\nStandard compilation of all modules | \\nPrebuilt Expo Modules for Android — Enabled by default for faster local builds | \\nPotential reduction in local Android build times (up to 25% reported) by using precompiled standard modules | \\n
Be aware that this is a high-level overview, and each of these features has more detailed specifications. The impact will vary depending on the project and how you implement the capabilities.
\\nKeep reading this post to gain a much clearer view of these capabilities!
\\nThe first step is to upgrade your project codebase to the latest version. Expo SDK 53 has many new changes. You must be aware that it’s still new, which means that there are probably going to be some bugs and breaking changes, which you can read about on their GitHub Issues page.
\\nTake a look at the initial steps for upgrading your project and follow along. This will ensure that there is a smooth upgrade phase.
\\nWe must first upgrade our project's Expo SDK version. This can be done fairly simply using the Expo CLI, which automates most of this process, including installing the right SDK version and the correct dependencies.
\\nBefore you run the upgrade, make sure that your project’s Git working directory is clean. You can do this by committing or stashing any changes. If you have yet to initiate the project as a Git repo, then you don’t need to worry about this step. Just run the following command in your project’s folder to begin the upgrade process:
\\nnpx expo install expo@latest\\n\\n
Alternatively, you can use these commands if you want more control over the version you’re installing:
\\n# Install a specific SDK version (for example, SDK 53)\\nnpx expo install expo@^53.0.0\\n\\n# Upgrade all dependencies to match the installed SDK version\\nnpx expo install --fix\\n\\n
With these commands, you are able to update the expo
package and install a compatible version of other Expo libraries such as expo-router
and expo-font
, which are found in your package.json
file. Make sure that you pay attention to any warnings or instructions. You can learn more by following the official Upgrading Expo SDK walkthrough guide.
If your project is a few versions behind the most recent release, then I’d recommend an incremental update (for example, going from 51 > 52 > 53). This can help isolate any potential issues with past versions.
\\n\\nSDK 53 ships with React Native 0.79, whose Metro bundler enforces package.json
exports
much more strictly. This is seen as a very positive step for the ecosystem. However, it can sometimes cause compatibility issues with other third-party libraries, which have yet to adopt this type of standard.
When the upgrade command has completed its task, run the npx expo-doctor
command to check for issues, including those related to dependency mismatches. Any warnings about peer dependencies or packages are likely to be flagged by the doctor.
If you encounter any errors with packages you have installed, it’s always a good idea to check the libraries repository for updates and fixes that are compatible with Expo SDK 53 and React Native 0.79. You might need to update these packages manually with the usual npm command npm install package-name@latest
or wait for a patch from the library maintainer. Searching the Expo forums, GitHub issues, and StackOverflow is another way to potentially resolve issues that other users might have found.
The New Architecture of React Native (Fabric for the UI layer and TurboModules for native modules) is now enabled by default across all projects. The architecture offers better performance and more fluid communication between JavaScript and native code.
\\nPreviously, communication relied on a system called the “bridge,” which acts as a translator as it sends and receives messages. However, the bridge was a bottleneck. Especially when handling many tasks at the same time, it often caused slower app performance and response times. The New Architecture eliminates the bridge in favour of a more direct approach, eliminating those delays and delivering faster, more responsive apps.
\\nYour project will be using this New Architecture going forward, unless you choose to opt out of it. If you happen to encounter any breaking changes or crashes, then it’s wise to temporarily disable it. Not all dependencies are going to be compatible yet.
\\nTo disable the New Architecture, add or modify it by changing the value for the newArchEnabled
property for the expo
object in the app.json
or app.config.js
file. This process is shown in the code example:
// In app.json or app.config.js (comments shown for illustration; plain app.json cannot contain comments)
{
  "expo": {
    "name": "my-app",
    "slug": "my-app",
    "version": "1.0.0",
    "orientation": "portrait",
    "icon": "./assets/images/icon.png",
    "scheme": "myapp",
    "userInterfaceStyle": "automatic",
    "newArchEnabled": true // change this value to false to opt out
  }
}
The New Architecture is definitely the future; however, the legacy bridge architecture is still the most widely understood. You can always opt out and use it as a fallback when encountering breaking changes, but it's worth noting that the New Architecture is the long-term choice.
\\n\\nFor more information on disabling it, take a look at the official documentation page on Expo New Architecture Guide – Disabling.
\\nWith the latest SDK 53 update, support is now available for edge-to-edge displays on Android devices. This means that it’s now possible to have the app’s UI draw beneath the system status and navigation bars.
\\nThis provides a more modern and immersive look. Going forward, it is the default behaviour in new projects created with SDK53 as well as the Expo Go client application.
\\nLike before, there is the option to opt in or opt out of this new feature. This behaviour can be controlled by using the edgeToEdgeEnabled
in your app.json
or app.config.js
as shown in this code snippet:
// In app.json or app.config.js - Example toggling edge-to-edge\\n \\"android\\": {\\n \\"adaptiveIcon\\": {\\n \\"foregroundImage\\": \\"./assets/images/adaptive-icon.png\\",\\n \\"backgroundColor\\": \\"#ffffff\\"\\n },\\n \\"edgeToEdgeEnabled\\": true // Use true or false\\n },\\n\\n
When testing on Android devices, you should now see backgrounds or full-screen images extend under the system bars when edge-to-edge is enabled. With this change, be mindful that interactive elements don't end up hidden behind those bars.
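One way to guarantee that is to pad your layout with the safe-area insets. Below is a minimal sketch using react-native-safe-area-context, which Expo project templates already include; the component and hook names come from that library:

import React from 'react';
import { Button, View } from 'react-native';
import { SafeAreaProvider, useSafeAreaInsets } from 'react-native-safe-area-context';

function Screen() {
  // Insets report how far the system bars intrude on each edge
  const insets = useSafeAreaInsets();

  return (
    <View style={{ flex: 1, paddingTop: insets.top, paddingBottom: insets.bottom }}>
      {/* The background can still draw edge-to-edge; controls stay reachable */}
      <Button title="Always tappable" onPress={() => {}} />
    </View>
  );
}

export default function App() {
  return (
    <SafeAreaProvider>
      <Screen />
    </SafeAreaProvider>
  );
}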
\\nHave a read about Edge-to-Edge Display Now Streamlined in this official Expo blog post and take a look at this example on Display content edge-to-edge in views to see how the change works on Android.
\\nCompleting this initial setup phase ensures that you are ready for the next one, as we explore the new APIs and features available.
\\nThe new release of SDK 53 means there are several big updates and many new features for the Expo ecosystem. In this upcoming section, we will go through some of them and explore how these exciting changes can enhance your project.
\\nexpo-audio
Go over your current audio implementation and replace the existing usage of the expo-av
setup with the new and improved expo-audio
library. This is a necessary migration path for handling audio in Expo.
With expo-audio
, projects now gain a much improved and more reliable foundation. It offers better performance characteristics and a modern and intuitive API design. Moving from expo-av
to expo-audio
is the recommended approach for audio in your projects.
Below, you can see a comparison between the old and new approach in these code snippets:
\\nexpo-av
(Older approach):import ParallaxScrollView from \'@/components/ParallaxScrollView\';\\nimport { Audio } from \'expo-av\';\\nimport { Image } from \'expo-image\';\\nimport { Button, StyleSheet, View } from \'react-native\';\\n\\nexport default function HomeScreen() {\\n async function playSoundAv() {\\n const { sound } = await Audio.Sound.createAsync(\\n require(\'@/assets/sounds/sound.mp3\')\\n );\\n await sound.playAsync();\\n }\\n\\n return (\\n <ParallaxScrollView\\n headerBackgroundColor={{ light: \'#A1CEDC\', dark: \'#1D3D47\' }}\\n headerImage={\\n <Image\\n source={require(\'@/assets/images/partial-react-logo.png\')}\\n style={styles.reactLogo}\\n />\\n }\\n >\\n <View>\\n <Button title=\\"Play Sound\\" onPress={playSoundAv} />\\n </View>\\n </ParallaxScrollView>\\n );\\n}\\n\\nconst styles = StyleSheet.create({\\n reactLogo: {\\n height: 178,\\n width: 290,\\n bottom: 0,\\n left: 0,\\n position: \'absolute\',\\n },\\n});\\n\\n
expo-audio
(recommended):import ParallaxScrollView from \'@/components/ParallaxScrollView\';\\nimport { useAudioPlayer } from \'expo-audio\';\\nimport { Image } from \'expo-image\';\\nimport { Button, StyleSheet, View } from \'react-native\';\\n\\nconst audioSource = require(\'@/assets/sounds/sound.mp3\');\\n\\nexport default function HomeScreen() {\\n const player = useAudioPlayer(audioSource);\\n\\n return (\\n <ParallaxScrollView\\n headerBackgroundColor={{ light: \'#A1CEDC\', dark: \'#1D3D47\' }}\\n headerImage={\\n <Image\\n source={require(\'@/assets/images/partial-react-logo.png\')}\\n style={styles.reactLogo}\\n />\\n }\\n >\\n <View>\\n <Button title=\\"Play Sound\\" onPress={() => player.play()} />\\n </View>\\n </ParallaxScrollView>\\n );\\n}\\n\\nconst styles = StyleSheet.create({\\n reactLogo: {\\n height: 178,\\n width: 290,\\n bottom: 0,\\n left: 0,\\n position: \'absolute\',\\n },\\n});\\n\\n
With the new library, it’s become even easier to create projects that require audio playback for music, podcasts, games, or anything you can imagine.
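For instance, basic transport controls map directly onto the player instance returned by the hook. This is a small sketch that assumes the player exposes pause() and seekTo() alongside play(), as described in the expo-audio API reference:

import { useAudioPlayer } from 'expo-audio';
import { Button, View } from 'react-native';

const audioSource = require('@/assets/sounds/sound.mp3');

export default function PlayerControls() {
  const player = useAudioPlayer(audioSource);

  return (
    <View>
      <Button title="Play" onPress={() => player.play()} />
      <Button title="Pause" onPress={() => player.pause()} />
      {/* seekTo takes a position in seconds */}
      <Button title="Restart" onPress={() => player.seekTo(0)} />
    </View>
  );
}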
\\nexpo-maps
(alpha)The new map library is impressive and brings many new features. For compatibility reasons, you need to be using Android or iOS 17+. The library is still in alpha, so there could be many more upcoming changes.
\\nexpo-maps
aims to provide a unified, modern, and stable JavaScript interface over native map components. On Android, it uses Google Maps. Meanwhile on iOS, expo-maps
makes use of Apple Maps, most notably the SwiftUI-based APIs, which are available from iOS 17 onwards.
Modern SwiftUI map components are the reason why iOS 17+ is needed with this release. Support for Google Maps on iOS is not on the roadmap for this library, as its main focus is to be a wrapper for the platform’s native map solution.
\\nA basic map view example can be seen here:
\\nimport { AppleMaps, GoogleMaps } from \'expo-maps\';\\nimport { Platform, Text } from \'react-native\';\\n\\nexport default function App() {\\n if (Platform.OS === \'ios\') {\\n return <AppleMaps.View style={{ flex: 1 }} />;\\n } else if (Platform.OS === \'android\') {\\n return <GoogleMaps.View style={{ flex: 1 }} />;\\n } else {\\n return <Text>Maps are only available on Android and iOS</Text>;\\n }\\n}\\n\\n
The library is still in alpha, so you’ll have to use development builds to try it. You can learn more on the Expo Maps page.
\\nexpo-background-task
expo-background-fetch
is now legacy and has been replaced by the more flexible expo-background-task
.
The new library is designed to handle more complex and reliable background processing needs. You now can register certain tasks that the operating system can execute, even when your app is minimized and not in active use. Any operations that do not require immediate user interaction can keep data fresh and perform maintenance. Some good use cases include tasks that require data synchronization, cleanup, or update checks.
\\nTake a look at this code snippet to get a feel for a potential use case:
\\nimport * as BackgroundTask from \'expo-background-task\';\\nimport * as TaskManager from \'expo-task-manager\';\\nimport * as Updates from \'expo-updates\';\\nimport { useEffect, useState } from \'react\';\\nimport { Button, StyleSheet, Text, View } from \'react-native\';\\n\\n// Define a task name for our background task\\nconst BACKGROUND_TASK_NAME = \'update-check-task\';\\n\\n// Define the task outside the component to ensure it\'s registered properly\\nTaskManager.defineTask(BACKGROUND_TASK_NAME, async () => {\\n try {\\n console.log(`Background task executed at ${new Date().toISOString()}`);\\n\\n // Check for updates\\n const update = await Updates.checkForUpdateAsync();\\n if (update.isAvailable) {\\n await Updates.fetchUpdateAsync();\\n console.log(\'Update downloaded and ready for next app launch\');\\n }\\n\\n return BackgroundTask.BackgroundTaskResult.Success;\\n } catch (error) {\\n console.error(\'Error in background task:\', error);\\n return BackgroundTask.BackgroundTaskResult.Failed;\\n }\\n});\\n\\nexport default function HomeScreen() {\\n const [isRegistered, setIsRegistered] = useState(false);\\n const [taskStatus, setTaskStatus] = useState(\'Not registered\');\\n\\n // Check if task is registered when component mounts\\n useEffect(() => {\\n checkTaskStatus();\\n }, []);\\n\\n // Check if the task is registered\\n const checkTaskStatus = async () => {\\n try {\\n const registered = await TaskManager.isTaskRegisteredAsync(BACKGROUND_TASK_NAME);\\n setIsRegistered(registered);\\n setTaskStatus(registered ? \'Registered\' : \'Not registered\');\\n } catch (error) {\\n console.error(\'Error checking task status:\', error);\\n }\\n };\\n\\n // Register the background task\\n const registerTask = async () => {\\n try {\\n await BackgroundTask.registerTaskAsync(BACKGROUND_TASK_NAME, {\\n minimumInterval: 60 * 15, // 15 minutes\\n });\\n\\n setIsRegistered(true);\\n setTaskStatus(\'Task registered successfully\');\\n } catch (error: any) {\\n console.error(\'Error registering task:\', error);\\n }\\n };\\n\\n // Unregister the background task\\n const unregisterTask = async () => {\\n try {\\n await BackgroundTask.unregisterTaskAsync(BACKGROUND_TASK_NAME);\\n setIsRegistered(false);\\n setTaskStatus(\'Task unregistered\');\\n } catch (error: any) {\\n console.error(\'Error unregistering task:\', error);\\n }\\n };\\n\\n return (\\n <View style={styles.container}>\\n <View style={{height: 40}}></View>\\n\\n <View style={styles.card}>\\n <Text>Status: {taskStatus}</Text>\\n\\n <View style={styles.buttonContainer}>\\n {!isRegistered ? (\\n <Button\\n title=\\"Register Background Task\\"\\n onPress={registerTask}\\n />\\n ) : (\\n <Button\\n title=\\"Unregister Background Task\\"\\n onPress={unregisterTask}\\n />\\n )}\\n </View>\\n </View>\\n\\n <View style={styles.card}>\\n <Text style={styles.infoTitle}>How it works:</Text>\\n <Text>1. Register the task to run in the background</Text>\\n <Text>2. Background task will check for updates</Text>\\n <Text>3. Task runs automatically when app is backgrounded</Text>\\n </View>\\n </View>\\n );\\n}\\n\\nconst styles = StyleSheet.create({\\n container: {\\n padding: 16,\\n flex: 1,\\n },\\n title: {\\n fontSize: 20,\\n fontWeight: \'bold\',\\n marginBottom: 20,\\n textAlign: \'center\',\\n },\\n card: {\\n backgroundColor: \'#f5f5f5\',\\n padding: 16,\\n borderRadius: 8,\\n marginBottom: 16,\\n },\\n buttonContainer: {\\n marginTop: 16,\\n },\\n infoTitle: {\\n fontWeight: \'bold\',\\n marginBottom: 8,\\n },\\n});\\n\\n
Just to be clear, Background Task
functionality is not available in Expo Go; you'll need to use a development build. You can take a look at the Expo Development Client documentation to learn more.
The latest version of Expo Router has new features like build-time redirects/rewrites and Protected Routes. With Expo Router v5, users can make use of some powerful enhancements for routing and navigation.
\\nBuild-time redirects and rewrites offer more efficient ways to manage URL changes, improvements to SEO when handling legacy links, and a more logical method to structure an app’s navigation without complex client-side logic. When using protected routes, it becomes simpler to implement authentication flows. A declarative method helps guard routes and automatically redirects unauthenticated users to a different screen.
\\nAs always, test your navigation setup before upgrading to avoid unforeseen breaking changes. Expo SDK 53 can support build-time redirects by using the expo-router
plugin. Configuring redirects can be accomplished by using the source
and destination
keys within the redirects
array in your configuration file.
Check out this example code:
\\n{\\n \\"expo\\": {\\n \\"plugins\\": [\\n [\\n \\"expo-router\\",\\n {\\n \\"redirects\\": [\\n {\\n \\"source\\": \\"/old-page\\", // The original path you want to redirect from.\\n \\"destination\\": \\"/new-feature\\" // The new path you want to redirect to.\\n }\\n ]\\n }\\n ]\\n ]\\n }\\n}\\n\\n
Expo Router v5 also provides a more reliable way to support protected routes. You are able to use the Stack.Protected component inside your layout files to render screens based on their authentication status.
\\nCheck out this example:
\\nimport { Stack } from \'expo-router\';\\n\\nconst isLoggedIn = true; // Replace with your actual authentication logic\\n\\nexport default function AppLayout() {\\n return (\\n <Stack>\\n <Stack.Protected guard={!isLoggedIn}>\\n <Stack.Screen name=\\"login\\" />\\n </Stack.Protected>\\n\\n <Stack.Protected guard={isLoggedIn}>\\n <Stack.Screen name=\\"private\\" />\\n </Stack.Protected>\\n </Stack>\\n );\\n}\\n\\n
In this setup:

- When isLoggedIn is false, the login screen is accessible
- When isLoggedIn is true, the private screen is accessible

This approach allows you to control access to different parts of your application based on a user's authentication status.
\\nIt’s also possible to implement authentication logic within _layout.js
or _layout.tsx
files using context providers as well as the Redirect
component from expo-router
. With this method, you can utilize dynamic redirection based on a user’s authentication status, too. These features can improve your app’s navigation structure, optimize SEO transitions, secure sections, and help handle deep links more effectively.
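To illustrate, a guard like this could live in a root layout file. The useAuth hook below is hypothetical; you would back it with your own context provider:

// app/_layout.tsx
import { Redirect, Slot } from 'expo-router';
import { useAuth } from '../providers/auth'; // hypothetical auth context hook

export default function RootLayout() {
  const { isLoggedIn } = useAuth();

  // Send unauthenticated users to /login before any child route renders
  if (!isLoggedIn) {
    return <Redirect href="/login" />;
  }

  return <Slot />;
}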
SDK 53 is shipping with React 19, so it’s now the perfect time to utilize new features, especially like <Suspense>
, which manages loading states, and the use()
Hook, which can handle promises or contexts more effectively.
React 19 introduces several features that can simplify asynchronous logic and rendering patterns. <Suspense>
allows you to define a loading fallback UI for components that are waiting for data — or any other type of asynchronous operation.
The new use()
Hook provides a more direct and cleaner way to hold the value of a promise or context directly inside your component’s render logic. This could potentially replace patterns involving useEffect
or context consumers in some specific use cases.
As an added bonus, these features can enable cleaner code and better user experiences, as they make loading states more seamless inside the component tree. To learn more, read the official React v19 documentation.
\\nHere’s a code snippet example that uses the use()
Hook with <Suspense>
:
import React, { createContext, Suspense, use, useState } from 'react';
import { ActivityIndicator, Button, StyleSheet, Text, View } from 'react-native';

const styles = StyleSheet.create({
  container: { flex: 1, padding: 20, justifyContent: 'center', alignItems: 'center' },
  title: { fontSize: 24, fontWeight: 'bold', marginBottom: 20 },
  text: { fontSize: 16 },
  card: { backgroundColor: 'white', borderRadius: 8, padding: 16, marginTop: 20, width: '100%', alignItems: 'center' },
  heading: { fontSize: 18, fontWeight: 'bold' },
  loadingContainer: { padding: 20, alignItems: 'center' },
});

async function fetchData() {
  console.log('Fetching data...');
  await new Promise(resolve => setTimeout(resolve, 2000));
  return { message: 'Data loaded successfully!' };
}

// Create the promise once at module level. Passing a freshly created promise
// to use() on every render would re-suspend the component indefinitely.
const dataPromise = fetchData();

type Theme = 'light' | 'dark';
const ThemeContext = createContext<{ theme: Theme; toggleTheme: () => void }>({
  theme: 'light',
  toggleTheme: () => {},
});

function DataDisplay() {
  // use() unwraps the promise; <Suspense> shows the fallback while it's pending
  const data = use(dataPromise);

  return (
    <View style={styles.card}>
      <Text style={styles.heading}>{data.message}</Text>
    </View>
  );
}

export default function HomeScreen() {
  const [theme, setTheme] = useState<Theme>('light');

  const toggleTheme = () => {
    setTheme(current => (current === 'light' ? 'dark' : 'light'));
  };

  const containerStyle = {
    ...styles.container,
    backgroundColor: theme === 'light' ? '#f5f5f5' : '#222',
  };

  const textStyle = {
    ...styles.text,
    color: theme === 'light' ? '#000' : '#fff',
  };

  return (
    <ThemeContext.Provider value={{ theme, toggleTheme }}>
      <View style={containerStyle}>
        <Text style={[styles.title, textStyle]}>React 19 Demo</Text>

        <Button
          title={`Switch to ${theme === 'light' ? 'Dark' : 'Light'} Theme`}
          onPress={toggleTheme}
        />

        <Suspense
          fallback={
            <View style={styles.loadingContainer}>
              <ActivityIndicator size="large" color={theme === 'light' ? '#0066ff' : '#ffffff'} />
              <Text style={textStyle}>Loading data...</Text>
            </View>
          }
        >
          <DataDisplay />
        </Suspense>
      </View>
    </ThemeContext.Provider>
  );
}
These React 19 additions make it much easier to simplify data fetching and loading-state management in components. You can display a fallback UI while lazy-loaded components and resources are fetched, consume asynchronous values more cleanly, and improve the perceived performance of your application by handling loading states more gracefully.
\\nNow that your Expo SDK 53 project has been upgraded and the initial setup and configuration are complete, it's time to adjust the development workflow and make sure that your app is stable. In these upcoming steps, we'll go over some strategies for making the best use of the new tooling and for testing thoroughly.
\\nExpo SDK 53 offers better optimization techniques, which speed up local Android build times because they use prebuilt versions of some Expo modules.
\\nThis feature is enabled by default in all new projects, as long as you’ve upgraded. In most cases, you don’t really need to do anything if you want to benefit from them.
\\nBecause some of the core Expo universal modules now ship precompiled, this optimization cuts time from the compilation stage. Users have reported local Android build time reductions of up to 25%.
\\nAs you can imagine, this is very beneficial. However, if you suspect an issue related to these prebuilt modules or need to compile them from source for debugging reasons, you can always opt out (just like with the other features mentioned in this guide).
\\nTo do this, configure the Expo autolinking options in your package.json
by setting buildFromSource
, as seen in the code example:
{\\n \\"name\\": \\"opt-out-example\\",\\n \\"dependencies\\": {},\\n \\"expo\\": {\\n \\"autolinking\\": {\\n \\"android\\": {\\n \\"buildFromSource\\": [\\".*\\"]\\n }\\n }\\n }\\n}\\n\\n
Understanding the elements of your bundle size is crucial to optimizing load times and overall performance.
\\nExpo Atlas, now stable in SDK 53, provides an excellent way to visualize your JavaScript bundle.
\\nTo use Expo Atlas, you must start your development server with the EXPO_ATLAS=1
environment variable as seen here:
EXPO_ATLAS=1 npx expo start\\n\\n
You can open Atlas through the More Tools option in the CLI, using the shift+m
key command, or by opening http://localhost:8081/expo/atlas in a browser. After launching your app on Android, iOS, or the web, Atlas provides all the information you need for each platform.
Expo Atlas allows you to see what packages and modules are contributing the most to your bundle size. The visual breakdown will help you identify large dependencies that can be trimmed, opportunities for code splitting, or areas in your code that can be better optimized to cut file size.
\\nTake a look at what the Expo Atlas web interface looks like in this example:
Expo Application Services (EAS) is a fantastic way to simplify the build and submit process for applications. With SDK 53, there is an improved workflow for pushing the Development Builds straight to the TestFlight testers, streamlining the testing loop.
\\nYou can configure your eas.json
to allow development builds to be submitted to the store. You can then use the npx testflight
command to complete the process.
To begin with, you can use the eas.json
file to ensure your development build profile has \\"distribution\\": \\"store\\"
. Then you can add a new submit profile specifically for these development client submissions, as shown here:
// In eas.json\\n{\\n \\"build\\": {\\n \\"development\\": {\\n \\"distribution\\": \\"store\\" // Allows this build to be submitted\\n }\\n },\\n \\"submit\\": {\\n \\"production\\": {},\\n \\"development_testflight\\": {\\n // New profile for submitting dev builds to TestFlight\\n \\"buildProfile\\": \\"development\\" // Specifies which build profile to use\\n }\\n }\\n}\\n\\n
After successfully building your development client with EAS Build, you can then submit it using the following command seen below:
\\nnpx testflight --profile development_testflight\\n\\n
This approach makes it much easier to distribute development builds to your iOS testers via TestFlight, as you can bypass the need for manually registering device UDIDs for ad-hoc distribution of development clients.
\\nWhile Expo Go is excellent for initial configuration and quick experimentation, development builds become more important as your app matures and as you move to newer SDKs like 53. I always recommend using development builds as your primary environment for testing and debugging while developing with SDK 53.
\\nExpo Go is a pre-built app with a fixed set of native modules. When you use new SDKs or have custom native code, Expo Go won’t be able to fully support all features.
\\nFor example, push notification warnings occur with SDK 53 on Expo Go, which might hide internal native integration issues. Development builds, on the other hand, are manually built versions of your app that utilize your own choice of native dependencies. This allows you to provide an environment significantly closer to your final released app.
\\nExpo Go is excellent for quick previews and limited native customization, but it sometimes fails to accurately represent production behaviour for complex apps or new SDK features. Development builds, in contrast, include all your project's native dependencies and offer a production-like test environment, which is essential for debugging native bugs and compatibility issues.
\\nUpgrading any SDK, especially one with significant changes like SDK 53, can require deep testing to catch any regressions. It’s important to test all critical user flows and features inside your application after you have upgraded.
\\nDon’t just rely on a quick smoke test; it’s a good idea to pay special attention to areas directly impacted by the SDK 53 changes. This can include features using newly migrated or added modules like expo-audio
, which has a new API. App navigation could also be impacted, especially if you have made changes to utilize the new Expo Router features. Check any custom native modules or components to ensure they behave as expected with the New Architecture, even if you have temporarily opted out.
Check for UI consistency and behaviour on Android, especially if you have implemented or modified the edge-to-edge display settings. Test on a selection of actual iOS and Android devices and OS versions that your app supports to catch device-specific problems firsthand. Prioritizing these workflow optimizations and thorough testing will give you a high-quality, high-performing application that realizes the full potential of Expo SDK 53.
\\nExpo SDK 53 is a significant release, giving developers a more robust and refined set of tools for building universal native apps. It’s important to ensure that you are getting the most out of these advances, such as the React 19 support, and the New Architecture.
\\nNew APIs like the stable expo-audio
, the alpha of expo-maps
, and robust expo-background-task support
are also worth adopting, because they can bring meaningful performance improvements to your app. Expo Router v5 deserves a mention too, alongside the tooling improvements that make local Android builds potentially much quicker. Lastly, let's not forget Expo Atlas, which enhances bundle analysis for your app project.
As you gain knowledge in these areas, it becomes simpler for you to develop more modern and efficient apps. It’s true that in some cases, the upgrade process can be frustrating. For example, navigating dependency compatibility, with a much stricter package.json
exports
situation, can result in short-term headaches. However, the long-term reward of being current with improved app performance, streamlined development process, and new feature access is well worth your time and effort.
To continue your learning of Expo SDK 53 and its related tech, check out the Official Expo SDK 53 Changelog and the Expo Upgrade Guide.
This article explores how to pass functions and structured objects as parameters in TypeScript. It highlights use cases, syntax differences, and practical scenarios where each may be preferred, especially when working with function types, inheritance, excess property checks, and optional fields.
\\nIn JavaScript, functions are considered first-class citizens, which means they can be handled like any other type of variable, including numbers, strings, and arrays. This allows functions to be passed into other functions, returned from functions, and assigned to variables for later use.
\\nThis feature is heavily used in asynchronous code, where functions are often passed into asynchronous functions, often referred to as callbacks. But this can be tricky to use with TypeScript.
\\nTypeScript offers us the fantastic benefits of adding static types and transpilation checks, and it can help us better document what types of variables we expect in our functions. But what happens if we need to pass functions?
\\nIt’s evident that typing these functions is necessary. However, the question arises: how do we type them, and how do we pass a function in TypeScript? In this tutorial, we will explore TypeScript functions, how to pass them as parameters in our apps, and how to pass objects defined by interfaces and type annotations into them.
\\nEditor’s note: This article was updated by Elijah Agbonze in May 2025 to compare passing inline type definitions as parameters with declaring interfaces and passing them, include an explanation of structural typing, and address edge cases related to passing TypeScript functions as parameters.
\\nMost TypeScript developers are familiar with typing simple variables, but constructing a type for a function is a little more complicated.
\\nA function type (note: this link redirects to old TypeScript docs, but it has a much clearer example than the newer ones) is made up of the types of arguments the function accepts and the return type of the function.
\\nWe can illustrate a very simple example to demonstrate this:
\\nconst stringify = (el : any) : string => { return el + \\"\\" } \\nconst numberify = (el : any) : number => { return Number(el) }\\n\\nlet test = stringify;\\ntest = numberify;\\n\\n
If implemented in JavaScript, the above example would work fine and have no issues.
\\nBut when we utilize TypeScript, errors are thrown when we try to transpile our code:
- Type \'(el: any) => number\' is not assignable to type \'(el: any) => string\'.\\n- Type \'number\' is not assignable to type \'string\'.\\n\\n
The error message thrown here is descriptive: the stringify
and numberify
functions are not interchangeable: they cannot both be assigned to the test
variable, as they have conflicting types. The arguments they receive are the same (one argument of type any
), but we receive errors because their return types are different.
We could change the return types here to prove our theory is correct:
\\nconst stringify = (el : any) : number => { return 1 } \\nconst numberify = (el : any) : number => { return Number(el) }\\n\\nlet test = stringify;\\ntest = numberify;\\n\\n
The above code now works as expected. The only difference is that we changed the stringify
function to match the return type of the numberify
function. Indeed, the return
type was breaking this example.
In TypeScript, we can declare a function type with the type
keyword. The type
keyword in TypeScript allows us to specify the shape of data:
type AddOperator = (a: number, b: number) => number;\\n\\n
Here, we define a type alias named AddOperator
using the type
keyword. It represents a function type that takes two parameters (a
and b
) of type number
and returns a value of type number
.
Another way to declare a function type is to use interface syntax. The below Add
interface represents the same function type as the above AddOperator
function type:
// Using interface for function type\\ninterface Add {\\n (a: number, b: number): number;\\n}\\n\\n
Explicitly defining function structures provides a clear understanding of expected inputs and outputs. This enhances code readability, serves as documentation, and simplifies code maintenance.
\\nAnother main advantage of declaring function types is the ability to catch errors at compile time. TypeScript’s static typing ensures that functions adhere to the specified types, preventing runtime errors caused by mismatched parameter types or invalid return types.
\\nIn the example below, the TypeScript compiler will throw an error, indicating the mismatch between the expected and actual return types:
\\n// Function violating the return type
const addFn: AddOperator = (a, b) => `${a}${b}`;
// Error: Type 'string' is not assignable to type 'number'
Declaring function types allows Integrated Development Environments (IDEs) to offer precise autocompletion.
\\nIntelliSense, a powerful feature offered by modern IDEs, provides us with context-aware suggestions. Function types provide explicit information about parameter types, making it easier to understand the expected inputs. As we start typing function names or parameters, IntelliSense utilizes the declared types to suggest valid options, minimizing errors and saving time.
\\n\\npass-by-reference
for functionsIn JavaScript and TypeScript, understanding the concepts of pass-by-value
and pass-by-reference
is crucial for working with functions and manipulating data. Primitive types (such as Boolean, null, undefined, String, and Number) are treated as pass-by-value
, while objects (including arrays and functions) are handled as pass-by-reference
.
When an argument is passed to the function, pass-by-value
means a copy of the variable is created, and any modifications made within the function do not affect the original variable. In the example below, we change the value of the variable a
inside the function, but the value of the variable a
outside isn’t changed as a
is passed into the function with pass-by-value
:
const numberIncrement = (a: number) => {\\n a = a + 1;\\n return a;\\n}\\nlet a = 1;\\nlet b = numberIncrement(a);\\nconsole.log(`pass by value -> a = ${a} b = ${b}`);\\n// pass by value -> a = 1 b = 2\\n\\n
When an object or array argument is passed to a function, it is treated as pass-by-reference
. The argument is copied as a reference, not the object itself. Thus, changes to the argument's properties inside the function are reflected in the original object. In the example below, we can observe that mutating the array originalArray
inside the function affects the originalArray
outside the function:
const arrayIncrement = (arr: number[]): void => {\\n arr.push(99);\\n};\\nconst originalArray: number[] = [1, 2, 3];\\narrayIncrement(originalArray);\\nconsole.log(`pass by ref => ${originalArray}`); // pass by ref => 1,2,3,99\\n\\n
Contrary to some misconceptions, even though the reference to the object is copied for pass-by-reference
, the reference itself is still passed by value. If the object reference is reassigned inside the function, it won’t affect the original object outside the function.
The below example illustrates reassigning an array reference of originalArray
inside the function, and its original object isn’t affected:
const arrayIncrement = (arr: number[]): void => {\\n arr = [...arr, 99];\\n console.log(`arr inside the function => ${arr}`); \\n};\\n//arr inside the function => 1,2,3,99\\nconst originalArray: number[] = [1, 2, 3];\\narrayIncrement(originalArray);\\nconsole.log(`arr outside => ${originalArray}`); // arr outside => 1,2,3\\n\\n
Generics in TypeScript provide a way to write functions that can work with any data type. The following example is sourced from the official TypeScript documentation:
\\nfunction identity<T>(arg: T): T {\\n return arg;\\n}\\nconsole.log(identity(\\"Hello, TypeScript!\\"));\\nconsole.log(identity(99));\\n\\n
Here, the identity
function can accept and return values of any type. This flexibility allows us to write functions that adapt to various data types.
We can create highly reusable and adaptable functions that work with various data types using generics.
\\nLet’s say we want to create a utility function for searching elements in an array based on a specific criterion. Using generics allows the function to work with arrays of various types and accommodate different search criteria:
\\nfunction findElements<T>(arr: T[], filterFn: (item: T) => boolean): T[] {\\n return arr.filter(filterFn);\\n}\\n\\n
Here, we create a generic function named findElements
that takes an array arr
and a filterFn
function as parameters. The filterFn
parameter is a callback function that determines whether an element satisfies a particular criterion, returning a Boolean.
Below are a couple of examples in which we use the function from above to deal with number types, object types, and different search criteria. We use the function to filter odd numbers from an array and inexpensive products from an array of products, demonstrating its flexibility with different data types:
\\nconst arr: number[] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];\\nconst oddNumbers: number[] = findElements(arr, (num) => num % 2 === 1);\\nconsole.log(\\"Odd Numbers:\\", oddNumbers);\\n\\ninterface Product {\\n name: string;\\n price: number;\\n}\\nconst products: Product[] = [\\n { name: \\"Phone\\", price: 400 },\\n { name: \\"Laptop\\", price: 1000 },\\n { name: \\"Tablet\\", price: 300 }\\n];\\nconst cheapProducts = findElements(products, (p) => p.price < 500);\\nconsole.log(\'cheap products:\', cheapProducts);\\n\\n
The use of generics makes the function highly reusable and adaptable, making it applicable to arrays of primitive types or custom objects without sacrificing type safety.
\\nFunction overloads allow us to provide multiple type signatures for a single function. This is particularly useful when a function can accept different combinations of argument types.
\\nTo use function overload, we must define multiple overload signatures and an implementation. The overload signature outlines the parameter and return types of a function without including an actual implementation body:
\\n// Overload signature\\nfunction greeting(person: string): string;\\nfunction greeting(persons: string[]): string;\\n// Implementation of the function\\nfunction greeting(input: string | string[]): string {\\n if (Array.isArray(input)) {\\n return input.map(greet => `Hello, ${greet}!`).join(\' \');\\n } else {\\n return `Hello, ${input}!`;\\n }\\n}\\n// Consume the function\\nconsole.log(greeting(\'Bob\'));\\nconsole.log(greeting([\'Bob\', \'Peter\', \'Sam\']));\\n\\n
In the above example, we create a function that demonstrates function overloads accepting parameters, either a string or an array of strings. This function, named greeting
, has two overloads to handle these scenarios. The implementation checks whether the input
parameter is a string or an array of strings and performs the appropriate action for each case.
We can leverage generics to create versatile functions that work with various data types. Additionally, function overload is valuable in enhancing parameter flexibility, allowing functions to accept different types while providing clear expectations for each case.
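The two techniques also compose. Here is a small sketch, with an illustrative wrap helper that is not part of the earlier examples, where overloads sit on top of a single generic implementation:

// wrap() returns the array itself for an array input,
// or a single-element array for a scalar input
function wrap<T>(value: T[]): T[];
function wrap<T>(value: T): T[];
function wrap<T>(value: T | T[]): T[] {
  return Array.isArray(value) ? value : [value];
}

const a = wrap(1);          // inferred as number[]
const b = wrap(['x', 'y']); // inferred as string[]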
\\nInterestingly, many other languages will create these function types based on the types of arguments, return types, and the number of arguments for the function.
\\nLet’s make one final example to expand on our last working example:
\\nconst stringify = (el : any, el2: number) : number => { return 1 } \\nconst numberify = (el : any) : number => { return Number(el) }\\n\\nlet test = stringify;\\ntest = numberify;\\n\\n
Developers familiar with other languages might think the above function examples aren’t interchangeable, as the two function signatures differ.
\\nThis example throws no errors, though, and it’s legitimate in TypeScript because TypeScript implements what’s referred to as duck typing.
\\nIn duck typing, TypeScript checks if the structure of the assigned function is compatible with the expected type based on the function’s parameters and return type. In this case, both stringify
and numberify
share the same structure: a function that takes one or more parameters (of any type) and returns a number. Despite the difference in the number of parameters between the two functions, TypeScript allows this assignment due to duck typing.
It’s a small note, but it’s important to remember: the number of arguments isn’t utilized in type definitions for functions in TypeScript.
\\nNow that we know precisely how to construct types for our functions, we need to ensure we type the functions that we pass in TypeScript.
\\nLet’s work through a failing example together again:
\\nconst parentFunction = (el : () ) : number => { return el() } \\n\\n
The above example doesn’t work, but it captures what we need.
\\nWe need to pass it to a parent function as a callback, which can be called later. So, what do we need to change here? We need to:

- Declare a function type for the el function argument, including its return type
- Declare the types of any arguments the el function takes (if it requires them)

Upon doing this, our example should now look like this:
This specific example doesn’t require arguments, but if it did, here is what it would look like:
\\nconst parentFunction = (el : (arg: string) => any ) : number => { return el(\\"Hello :)\\") } \\n\\n
This example is relatively simple to explain the concepts of TypeScript functions easily. However, if we have more complicated types, we may spend a significant amount of time typing everything.
\\nThe community maintains plenty of high-quality open source typings commonly used in TypeScript, called Definitely Typed, which can help us simplify and speed up the typing we need to use.
\\nThis section will look extensively at how object literals are declared and passed as arguments into functions in TypeScript. Depending on your use case, you can decide to declare an object type before passing it or passing it inline:
\\ninterface Product {\\n name: string;\\n price: number;\\n}\\n\\nconst getProductInfo = (product: Product): string => {\\n return `This is a ${product.name} made for durability and sustainability. Going for a whooping $${product.price}. Best offer!`;\\n};\\n\\nconsole.log(getProductInfo({ name: \\"Phone\\", price: 400 }))\\n\\n
TypeScript performs checks on your objects to make sure they don't have any extra properties beyond what was declared in the object type:
\\ninterface Product { \\n name: string; \\n price: number; \\n} \\n\\nconst product1: Product = { \\n name: \\"Phone\\", \\n price: 400, \\n brand: \\"Apple\\" \\n}\\n\\n
The code above will throw an error because brand
is an excess property not declared in the Product
type.
In some cases, TypeScript doesn't throw an error on an excess property. One such case is assigning the object from a variable rather than as a fresh literal, because excess property checks only apply to object literals:
\\nconst productObj = { \\n name: \\"Phone\\", \\n price: 400, \\n brand: \\"Apple\\" \\n} \\n\\nconst product1: Product = productObj; // works fine \\n\\n
Another way to avoid an error on an excess property is to use a type assertion:
\\nconst product1 = { \\n name: \\"Phone\\", \\n price: 400, \\n brand: \\"Apple\\" \\n} as Product;\\n\\n
If a property may not always be present on objects of the declared type, you should declare it as optional:
\\ninterface Product { \\n name: string; \\n price: number; \\n discount?: number; \\n} \\n\\nconst product1: Product = { \\n name: \\"Phone\\", \\n price: 400, \\n discount: 10 \\n} \\n\\nconst product2: Product = { \\n name: \\"Tablet\\", \\n price: 300 \\n} \\n\\n
When working on objects with optional properties, TypeScript requires you to perform a check for the optional property before using it:
\\n// This version will not compile: p.discount is possibly undefined
const getTotalDiscountBroken = (purchasedProducts: Product[]): number => {
  return purchasedProducts.reduce(
    (total: number, p: Product) => (p.discount + total), // TypeScript will throw an error
    0
  );
};

// Checking the optional property first works fine
const getTotalDiscount = (purchasedProducts: Product[]): number => {
  return purchasedProducts.reduce(
    (total: number, p: Product) => (p.discount ? p.discount + total : total),
    0
  );
};
Declaring a type/interface is useful for cases where it will be reused across your app or helps maintainability. Oftentimes, though, you will need a parameter type you'll never use again, and you can simply define it inline without first declaring a named type:
\\nconst calculateShippingCost = (item: { \\n amount: number; \\n destination: { international: boolean }; \\n weight: number; \\n}): number => { \\n const baseRate = item.destination.international ? 25 : 5; \\n return item.weight * item.amount * baseRate; \\n};\\n\\nconsole.log(calculateShippingCost({ amount: 500, destination: { international: true }, weight: 30 }));\\n\\n
On the other hand, you may have a type that needs to be reused. In this case, you should declare an interface/type:
\\ninterface User { \\n name: string; \\n age: number; \\n orders: number; \\n itemsInCart: number; \\n} \\n\\nconst greetUser = (user: User): string => { \\n return `Hello ${user.name}, you have ${user.itemsInCart} items in your cart!`; \\n};\\n\\n
Like many comparisons in programming, deciding when to use something is often the question, rather than which to use. This is the same case here.
\\nBoth interfaces and types can be used almost interchangeably. They both serve the same purpose, but there are some subtle differences between the two that can make a difference during your development.
\\nOne of these is how function types are declared. We saw an example of both use cases, and the difference boils down to a :
for interface and =>
for types.
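To make the difference concrete, here are the two styles side by side, repeating the earlier Add examples:

// Interface: the call signature uses a colon before the return type
interface Add {
  (a: number, b: number): number;
}

// Type alias: the function type uses an arrow before the return type
type AddOperator = (a: number, b: number) => number;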
Another case is how they both manage extension. Interfaces do this with the use of the extends
keyword (OOP lovers would feel right at home with this):
interface User { \\n name: string; \\n age: number; \\n}\\n\\ninterface AppUser extends User { \\n orders: number; \\n itemsInCart: number; \\n} \\n\\n
Types, on the other hand, use intersection (&
):
type User = { \\n name: string; \\n age: number; \\n}; \\n\\ntype AppUser = User & { \\n orders: number; \\n itemsInCart: number; \\n};\\n\\n
This is just an introduction. You may want to look at an in-depth article on extending object-like types in TypeScript.
\\nInterfaces can be declared multiple times, and the TypeScript compiler will automatically merge the declarations. This process is called declaration merging, and it is exclusive to interfaces:
\\ninterface User { \\n name: string; \\n age: number; \\n}; \\n\\ninterface User { \\n orders: number; \\n itemsInCart: number \\n} \\n\\n
Types are more flexible when modeling and manipulating complex data, as with discriminated unions:
\\ntype ApiResponse<T> = \\n | { status: \\"success\\"; data: T; timestamp: number } \\n | { status: \\"error\\"; error: string; code: number }; \\n\\nconst processResponse = (response: ApiResponse<User[]>): string => { \\n if (response.status === \\"success\\") { \\n // TypeScript knows this is the success branch \\n return `Got ${response.data.length} users`; \\n } else { \\n // TypeScript knows this is the error branch \\n return `Error ${response.code}: ${response.error}`; \\n } \\n};\\n\\n
You’ll find more examples and use cases in our blog, Types vs interfaces in TypeScript.
\\nI hope this article helps you better understand the TypeScript landscape around passing functions and object literals as arguments to other functions. Here is a GitHub gist of all the code examples for testing.
\\nCallbacks typically rely on passing functions to other functions, so you’ll often see heavy use of callbacks in any mature TypeScript codebase.
\\nHappy coding!
API documentation is more than a technical formality; it's a make-or-break component of your API's success. In Merge's guide to API evaluation criteria, “comprehensive data” ranked as the second most important factor when evaluating an API, right behind consistent data format. That's no coincidence: clear documentation is what makes an API usable in the first place.
\\nIn this tutorial, we’ll explore why API documentation matters, recent trends in the space, and how to build great docs from scratch using Docusaurus — including writing components for HTTP methods step by step.
\\nAPIs (Application Programming Interfaces) are the backbone of modern software development. Whether you’re building a web app, mobile app, or microservice architecture, chances are you’ll need to consume or expose an API.
\\nBut even the most powerful API is only as useful as its documentation. API docs are the interface to your interface — the bridge that connects your functionality to the developers who want to use it. Done right, documentation lowers the barrier to entry, drives adoption, and reduces frustration. Done poorly (or worse, not at all), it turns your API into a black box.
\\nAt its core, API documentation explains what your API does, how to use it, and what to expect from its endpoints. It often includes guides, reference material, code samples, and tutorials that developers can reference at any point in their integration journey.
\\nHere’s why strong documentation pays off:
\\nLuckily, you don’t have to start from scratch. There are a ton of tools designed to help you generate, manage, and publish great API docs. Some are tightly coupled with specs like OpenAPI; others offer general-purpose flexibility. Here are a few worth considering:
\\nIn the next section, we’ll walk through using Docusaurus to build your own API documentation site — from setting up the framework to writing reusable components for HTTP methods.
\\nAPIs power much of the modern internet, from enabling third-party integrations to supporting full-fledged web services. As they’ve become essential to software development, the way we document APIs has evolved, too. Gone are the days of static, endpoint-only docs. Today, developer expectations are higher, and the tooling is catching up fast.
\\nGreat API docs aren’t just about completeness — they’re about usability. The best ones focus on intuitive navigation, fast search, clean formatting, and real-world use cases. Docs are increasingly treated as part of the product experience, not just a support artifact.
\\nTreat your documentation like source code: version-controlled, pull-requested, and CI/CD-integrated. Using formats like Markdown or MDX, teams can collaborate on docs the same way they collaborate on features — keeping documentation in lockstep with the API it describes.
\\nStatic docs don’t cut it anymore. Developers expect to try out endpoints directly within the documentation using tools like Swagger UI, Redoc, or Postman’s “Run in Postman” button. Interactivity reduces guesswork and speeds up onboarding.
\\nInstead of just listing what an endpoint does, modern docs show how it fits into real workflows. Think code samples in multiple languages, request/response examples, and step-by-step tutorials that help developers solve actual problems.
\\nComplex ideas are easier to grasp with visuals. Diagrams, videos, and interactive elements can go a long way in making your API docs more engaging — and more effective.
\\nManual updates are error-prone and unsustainable. Tools that sync docs with API specs (like OpenAPI), test suites, or code comments help ensure that your docs stay up to date as your API evolves.
\\n\\nAs more dev teams go global, API docs are becoming more tailored, both by user role (e.g., frontend vs. backend) and by region or language. Localization and role-based views help your docs scale with your user base.
\\nIf you’re building an API-first startup, your API is your product, and your documentation is your UX. That means it’s not just a support tool; it’s a key part of your go-to-market strategy.
\\nHere are some best practices for getting it right early.
\\nUse API design-first approaches (like OpenAPI specs) to define contracts before coding. It helps clarify the product vision and gives you a head start on documentation.
\\nDocs shouldn’t be a chore you save for the end. Treat them as a core task — the same developers building the API should help document it.
\\nPut yourself in the shoes of a first-time user. Is the documentation clear? Can you find what you need in under a minute? Make it easy for devs to hit the ground running.
\\nLean on tools that generate or update documentation from your API code or specs to keep things accurate without extra overhead.
\\nTreat your docs like a product: gather user feedback, track pain points, and continuously improve.
\\nUse Git, code reviews, and CI/CD pipelines to manage your documentation, especially if you’re shipping multiple versions of your API.
\\nDocusaurus is a powerful static site generator built by Meta and designed specifically for documentation websites. It’s React-based, which means you get a lot of flexibility in how you customize your site, and it comes with features that make API documentation much easier to manage:
\\nIf you’re looking for full control over your API documentation experience — and want to go beyond what spec-only tools can provide — Docusaurus is a great choice.
\\nTo follow along with the steps below, make sure you have Node.js version 18.0 installed on your machine.
\\napi-doc-site
is the name of your folder, and classic
is the name of the Docusaurus recommended template to get started quickly.docusaurus.config.ts
— The main configuration file for your site/docs
— This is where you’ll put your documentation Markdown or MDX files. Each .md
or .mdx
file in this directory (and its subdirectories) typically becomes a documentation page/src
— Contains non-documentation pages (like the homepage), React components, and custom CSS/static
— For static assets like images, fonts, etc.sidebars.ts
— Defines the structure and order of your documentation sidebarnpm start\\n
http://localhost:3000
.Now that we have our site running, let’s start building our documentation site.
\\nOrganize the documentation folders first. Create a subdirectory within /docs
for your API documentation, e.g., /docs/api
.
In the `/api` folder, create a `_category_.json` file. This file gives the `api` folder a label in the sidebar, as shown below:
```json
{
  "label": "API Tutorial",
  "position": 2
}
```
Next, create a `users.mdx` file in the `/api` folder. In this new file, add basic documentation, like so:
```mdx
---
id: users-api
title: Users API
sidebar_label: Users
---
import HttpMethod from '@site/src/components/HttpMethod'

<HttpMethod method="GET" /> `/api/users`

# Users API Endpoints

Documentation for managing users.

---

## Get all users

This endpoint retrieves a list of users.

**ApiKey:**
No API key required

**Content-Type:**
application/json

**Request Body:**
No request body

#### Headers

| Header | Value |
|---|---|
| Accept-Language | |

#### Headers

| Header | Value |
|---|---|
| Accept | text/plain |
```

N.B., this is only for illustration purposes; the content of your real documentation would be more extensive.
At line 6, we have the HTTP method component. Let’s create the component before we proceed.
To do that, create a `components` folder inside your `src` folder and add a file for the HTTP method component (e.g., `/src/components/HttpMethod.tsx`).
Write the code below inside the `HttpMethod.tsx` file:
```tsx
import React from 'react';
import clsx from 'clsx';
import styles from './HttpMethod.module.css';

// Map each HTTP verb to its CSS Module class name
const methodColors: Record<string, string> = {
  GET: 'get',
  POST: 'post',
  PUT: 'put',
  PATCH: 'patch',
  DELETE: 'delete',
};

function HttpMethod({ method }: { method: string }) {
  const upperMethod = method.toUpperCase();
  const colorClass = methodColors[upperMethod] || 'default';

  return (
    <span className={clsx(styles.httpMethod, styles[colorClass])}>
      {upperMethod}
    </span>
  );
}

export default HttpMethod;
```
To style the component, create a corresponding CSS Module file, `/src/components/HttpMethod.module.css`, and add this CSS to it:
```css
.httpMethod {
  display: inline-block;
  padding: 0.2em 0.5em;
  margin-right: 0.5em;
  border-radius: 4px;
  font-weight: bold;
  font-size: 0.9em;
  color: white;
  text-transform: uppercase;
}

.get { background-color: #61affe; }
.post { background-color: #49cc90; }
.put { background-color: #fca130; }
.patch { background-color: #50e3c2; }
.delete { background-color: #f93e3e; }
.default { background-color: #666; }
```
Run your site with this command:

```bash
npm start
```
Your site should look like this:
At this point, we’ve explored what API documentation is, why it matters, and where the space is heading; built a basic documentation site; and created a reusable HTTP method component. You can expand upon this depending on your SaaS needs.
\\nGreat API documentation isn’t just a technical checklist item — it’s one of the most powerful tools you have to drive adoption, reduce friction, and earn developer trust. As the ecosystem matures, clarity, interactivity, and developer-first thinking are no longer optional. They’re expected.
\\nFor API-first startups, treating documentation as a core product asset from day one sets the foundation for long-term success. By integrating it into your development workflow and embracing automation, feedback, and usability, you ensure your documentation evolves with your API, rather than after it.
\\nTools like Docusaurus make it easier to meet these expectations. With built-in support for Markdown, versioning, and React-based customization, you can build docs that not only inform but also guide and inspire. Whether it’s explaining core concepts or walking developers through complex integrations, Docusaurus lets you create documentation that’s as thoughtful and user-friendly as the API behind it.
\\nIn the end, well-crafted documentation is more than a reference. And when done right, it’s one of the best investments you can make in your product’s success.
Understanding the differences between profit and cost centers in engineering teams is essential for developers. The two have fundamentally different mentalities in the way they operate. As a result, developers must know the difference to determine which would work better for them.
\\nThis article examines the differences between profit vs. cost center organizations, including the pros and cons. It will explore differences in team structures, ownership models, and how engineers are assigned to projects, helping you understand the trade-offs between the two.
\\nLet’s begin with some definitions. A profit center is a team or organization that is considered to generate revenue for the company. On the other hand, a cost center is a team that costs money for the company and does not directly contribute to revenue.
\\nFor example, legal, accounting, and HR are technically all cost centers. They are not bringing in money, but are still essential to the company. Without them, the company couldn’t hire or train employees or remain compliant with tax laws. They typically have budget limits and have to track their expenses carefully.
\\nOn the other hand, an engineering team supporting a credit card application would be a profit center. This team supports a product that brings revenue and customers to the company. Some engineering teams might fall under the umbrella of cost centers – the infrastructure team or tooling – but the whole tech department would be viewed as a profit center.
\\nNowadays, technical companies consider their engineering departments to be profit centers. They have created a product and treat it as the main revenue generator. When companies have this product-driven tech mentality, investment in the engineering department is seen as beneficial, as it would lead to innovation and revenue.
\\nThe structure of the engineering department is typically divided into squads focused on a specific scope of the product. For instance, at a healthcare company I worked at, we had dedicated squads for search, appointment booking, authentication, front-end design, accessibility, marketing, and SEO optimization.
\\nIn product-driven tech companies, every part of the product is owned by a squad. Each of them would have five to six engineers and an engineering manager. Some engineering managers may manage multiple squads. The same reality can also be applied to the designer and the product owner. Each of these squads would also have its roadmaps filled with projects to either expand or improve the scope of the product.
\\nThis structure empowers teams with clear ownership, incentivizing engineers to continuously improve their domain. For example, I once worked on a team where our front end, for SEO reasons, was half in Slim and half in React.
\\nFor one quarter, one of my deliverables was a migration plan to move everything to React. I ended up presenting this plan to the team and CTO, and it was added to the roadmap. We could invest in these kinds of initiatives because addressing technical debts would help us in the long run.
\\nHowever, this model isn’t without drawbacks. Teams can get siloed. You can get very specialized in one part of the product and miss the big picture, unless there are initiatives to get people to move teams every once in a while, or lunch and learns to learn about another team. Engineers who want to be promoted must demonstrate a higher-level vision and therefore should seek projects that allow them to work with other teams. For example, I once worked on migrating our reCAPTCHA tool, which got me in contact with the booking and login teams.
\\nAt my previous company, we also had the “Live My Life” initiative, which allowed developers to spend two weeks with another team to see how they work and their part of the product. It was a great tool for developers who wanted to move but didn’t know where.
\\nAfter more than eight years of working for startups and scale-ups, I joined a big corporation. I wasn’t sure what to expect at first, as I had worked for a big company before, but it had managed to retain that entrepreneurial mindset. This new one didn’t. Not only was it a big corporation, but it was a financial one.
\\nGone is the idea that the tech product is the money maker. In this environment, the engineering department is treated as a cost center. The product isn’t the moneymaker: the financial teams are. Engineering exists to support their initiatives, not to lead them.
\\nUnlike squads used in profit centers, developers are resources assigned to a project. These projects can last for a few sprints to a few months or years. Once done, developers will move to the next project.
\\n\\nThe methodology is also closer to the waterfall method. While the development phase can use the agile methodology, you typically have a requirement phase before the start of it. Once those requirements are gathered, a budget is created and must be approved before development can start.
\\nThere are upsides to this model. You get to work on a variety of projects, which reduces the risk of becoming siloed. The pace is slower, offering a better work-life balance. Tech stacks can be modern, even if not cutting-edge. These companies also tend to offer greater job security, as they’re typically risk-averse and less prone to layoffs.
\\nThe negatives are that the pay isn’t great and the work isn’t always exciting. In addition, by not having dedicated teams, there is less of a sense of ownership; developers will move from project to project. In addition, if engineering is treated as a cost center, addressing tech debt needs to be approved, and a budget has to be dedicated to it.
\\nIn conclusion, engineering as a profit or a cost center are two different mentalities.
\\nIn profit centers, products are divided into teams that own a scope and have their own roadmap. Things move more quickly, but engineers can become siloed and miss the big picture. The pay is good, but the job security is not guaranteed.
\\nFor cost centers, developers are resources assigned to projects. The work is slower, and the pay isn’t the greatest, but there is more work-life balance as people leave work on time at 5 pm.
\\nAs a developer, what works best for you depends a lot on what you are looking for.
\\nIn my opinion, the main differentiator is stability. In profit centers, I found that most developers were in their 20s or 30s, in a relationship but without children. In contrast, in cost centers, the average age is higher, and most seem to be married with children. They are typically at these companies for a long time, and use the benefits offered by these companies, like pension plans:
\\nWhether you’re early in your career or thinking about long-term goals, understanding how your engineering team is positioned within the organization can help you make better career choices.
Slow-loading pages can stem from multiple causes, which makes them one of the most challenging issues to fix in web development. Lighthouse (LH) is Google’s free and open-source website auditing tool that can help you detect and solve your web performance issues and speed up your site.
\\nIn this post, we’ll look into what Lighthouse audits are, how to interpret them, what they look like on a real website, and how to generate them in different ways. For the examples, we’ll use the homepage of the Mozilla Developer Network.
\\nLighthouse audits are automated diagnostic checks that evaluate different aspects of the user experience and performance of a web page. They are part of a Lighthouse report that you can generate by running the LH tool on a web page. We’ll see later how to do this, but the easiest way is to use the Lighthouse tab in Chrome DevTools, which is what I’ll do for the screenshots below).
\\nThere are four categories of Lighthouse audits — Performance, Accessibility, Best Practices, and SEO. In this article, we’ll focus on Lighthouse’s Performance audits:
\\nNote that when developers speak about ‘Lighthouse audits’, they typically mean Performance audits (when discussing other categories, they tend to refer to them by name, e.g. “SEO audits in Lighthouse”).
\\nCurrently, there are 38 different Lighthouse Performance audits. Each approaches web performance from a different angle to help you understand one distinct reason behind poor or mediocre Web Vitals scores, slow-loading pages, or high bounce rates.
\\nIn addition to the numeric evaluation, which shows whether a page passes the threshold for the metric used for the specific audit (e.g., initial server response time), each LH audit provides hands-on recommendations about how you can improve your result (in case it needs improvement).
\\nLighthouse audits can also help you fix your Core Web Vitals, which are part of Google’s Page Experience signals. Therefore, they have a direct impact on your search engine rankings.
\\nLighthouse measures Web Vitals (including Core Web Vitals) and shows them at the top of each LH report. However, Lighthouse is a lab (a.k.a. synthetic) website auditing tool. This means that the performance tests run in a simulated environment under controlled conditions, which can be your local browser environment or a remote server. We’ll look into this below in the “6 ways to run Lighthouse” section).
\\nGoogle also measures Web Vitals using the same formulas, but it collects the data from real Chrome users using Real User Monitoring (RUM). Then, it aggregates this data (called field data) and publishes the results in the Chrome User Experience (CrUX) Report.
\\nCore Web Vital values from the CrUX Report (i.e., Largest Contentful Paint, Interaction to Next Paint, and Cumulative Layout Shift) are used as Page Experience signals in Google’s search engine algorithm. These field scores are also what you see in Google Search Console in the Core Web Vitals Report.
\\nBy addressing your web performance issues identified by Lighthouse audits (lab data), you can improve your field scores that affect your SEO rankings (see more about the differences between lab and field data).
\\nYou can find Lighthouse’s Performance audits as a list in the Diagnostics and Passed Audits sections, below the numeric Web Vitals values:
\\nLighthouse audits use a color code system similar to the one that Google uses for the evaluation of Web Vitals, but it also includes gray in addition to the three traffic light colors:
\\nAudits that received red and yellow flags are grouped into the Diagnostics section, audits with a green flag are shown in the Passed Audits section (which can be found below the former one), and audits with a gray flag can appear in either section:
\\nNote that not all LH audits use all colors. For some audits, you can only get the gray informational flag but not the green one, as these are aspects of web performance that you should keep improving even if the page passes the audit threshold. Similarly, for some audits, you can only receive the red flag but not the yellow one, as a less-than-good result means failure for these ones in any case.
\\nAs I mentioned above, Lighthouse currently has 38 performance audits. The below examples are from a Lighthouse desktop report run on the homepage of the Mozilla Developer Network.
\\nNote, however, that Lighthouse also has a mobile report, which may have different results for the same audits since the test page runs in a mobile viewport instead of a desktop one, and LH uses CPU and network throttling settings typical of a mobile environment.
\\nThe tested page received a red flag for the Avoid large layout shift Lighthouse audit.
\\nAs you can see below, LH provides a detailed explanation, including a short description of the audit, the affected Web Vitals (here, CLS), a list of the HTML elements that caused layout shifts during the test run, and a numeric value of how much each unstable element added to the CLS score:
\\nNote that if you use a popular framework or content management system, such as Next.js or WordPress, Lighthouse also gives platform-specific recommendations (e.g., it sometimes recommends a WordPress plugin that can help fix the issue).
\\nFor the Serve images in next-gen formats audit, our test page received a yellow flag.
\\nLike above, the expanded audit shows a short explanation, the Web Vitals, the audit affects (here, FCP and LCP), the total amount of potential savings (i.e., 64 KB), and a list of the image files that could be served in next-gen formats, such as WebP or AVIF, along with the resource size and potential savings:
\\nAvoid an excessive DOM size is one of the aforementioned Lighthouse audits that never assigns the green flag, as this is a web performance indicator that can cause many other issues, such as increased memory usage, longer style calculations, and more forced layout reflows.
\\nWhile the test page passed the audit threshold (which is 800 nodes for the yellow flag and 1,400 nodes for the red one, according to the LH docs) with 705 DOM nodes, it still received the gray informational flag, along with numeric values for the Maximum DOM Depth and Maximum Child Elements metrics (ideally, these should be as small as possible):
\\nBelow, you can see an example of a Lighthouse audit our test page received a green flag for. Note that LH changes the text of passed audits to phrases suggesting success (e.g., Page didn’t prevent back/forward cache restoration).
\\n\\nLighthouse doesn’t show numeric values or longer explanations for audits in the Passed Audits section, but it still displays a short description of the audit, along with a link to a resource that can help you understand its importance (here, about bfcache, which makes on-site navigation faster):
\\nAll of these options are based on the same Lighthouse API, which means the different tools use the same formulas for the calculations.
\\nThe main difference between the options below is the testing environment, which can cause variation across the results for the same Lighthouse audit run with different tools.
\\nThe easiest way to run Lighthouse is to open Chrome DevTools in Incognito mode (which doesn’t load your browser extensions so they won’t affect the results). Navigate to the Lighthouse tab, and run either a desktop or mobile test, which will re-launch the page in either a desktop- or mobile-sized browser window.
\\nLighthouse reports run in Chrome DevTools reflect your local browsing environment, including your hardware, operating system, browser version, DevTools configuration, and more. This means that if someone else runs the same LH tests on their own machine, they can get (slightly) different results for the same audits.
\\nPageSpeed Insights (PSI) is Google’s online web performance testing tool that shows Web Vitals from the CrUX Report at the top of the PSI report (these are the field scores used in their search engine algorithm) and both the mobile and desktop Lighthouse reports run on Google’s remote servers.
\\nPSI allows you to see field and lab data below each other for the same web page, which can be especially useful for competitor analyses.
\\nLighthouse has a command line tool that you can install with the following command:
\\nnpm install -g lighthouse\\n\\n
Lighthouse CLI comes with lots of options that allow you to configure the test conditions, choose the audits you want to run, change the throttling method, generate the output in different formats (e.g., HTML, JSON, etc.), and more.
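For instance, a desktop, performance-only run that writes an HTML report might look like this (the URL is a placeholder):

```bash
lighthouse https://example.com \
  --preset=desktop \
  --only-categories=performance \
  --output=html \
  --output-path=./report.html
```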
\\nLighthouse CI is a suite of developer tools that allows you to add Lighthouse to your CI/CD pipeline and set up actions such as automatically running LH for every commit, comparing the results of LH audits between different builds, failing builds when certain audits fail, and more.
\\nYou can also use the Lighthouse API directly, which can be useful if you want to integrate Lighthouse features into your own Node.js application or combine LH with other tools. For more information, check out the docs about how you can run Lighthouse programmatically as a Node.js module.
\\nIf you want to monitor your Lighthouse audits over time, compare them against your competitors’ results, or get alerts about regression, you can also use a Lighthouse monitoring dashboard, such as WebPageTest or DebugBear, which allow you to set up scheduled tests from different locations, collect historical Lighthouse data, see performance trends, and more.
\\nWeb performance issues are part of your technical debt. They can originate from things such as legacy code, low-performing infrastructure (e.g., using a shared server), conflicts between different parts of your codebase, lack of modern frontend techniques (e.g., module bundling and caching), and other factors.
\\nLighthouse audits allow you to break up your web-performance-related technical debt into small, actionable changes that you can implement one by one.
\\nTo optimize web performance on your site, prioritize audits with the worst results and the biggest performance impact, and then gradually work towards addressing smaller issues. Also, check both your desktop and mobile LH results, as your site might experience performance issues on mobile that don’t exist on desktop, and vice versa.
\\nThe most effective approach to maintaining consistent website health is to integrate Lighthouse audit checks into your regular web development workflow so that you can catch performance issues as soon as they emerge and address them before they start to escalate.
If you’re looking for a FaunaDB alternative to migrate to, then you’re in the right place. This article will cover nine other platforms you can use and factors to consider when choosing an alternative:
\\nThe FaunaDB team announced that they are shutting down their services on May 30, 2025. You can explore the end of Life FAQ to learn more. This means their customers, which consist of 80,000+ development teams in over 180 countries, have to find a new database solution to work with.
\\nEach section in this post will run through the following:
\\nBefore we proceed, here’s a comparison table that summarizes how these platforms stack up against each other:
\\nPlatform | \\nPricing | \\nType of Database | \\nData Model | \\nQuery Language | \\n
---|---|---|---|---|
Convex | \\nFree tier, paid starts at $25/member/month | \\nServerless DB | \\nDocument, Real-Time, AI | \\n❌ | \\n
MongoDB Atlas | \\nForever free plan, paid from $0.08/hr | \\nNoSQL | \\nDocument | \\nMQL (MongoDB Query Language) | \\n
PlanetScale | \\nPaid plans start at $39/month | \\nDistributed SQL | \\nRelational, SQL | \\nSQL | \\n
CockroachDB | \\nFree plan, paid starts at $0.18/hr | \\nDistributed SQL | \\nRelational, SQL | \\nSQL | \\n
TiDB | \\nServerless from $0, paid starts at $0.44/hr | \\nDistributed SQL | \\nRelational, NoSQL | \\nSQL | \\n
OceanBase | \\nPaid plans start at $1.10/hr | \\nDistributed SQL | \\nRelational, SQL | \\nSQL | \\n
Supabase | \\nFree plan, paid plans from $25/month | \\nRelational (PostgreSQL) | \\nRelational, Document | \\nSQL (PostgreSQL) | \\n
Couchbase | \\nFree plan, paid plans start from $0.15/hr | \\nNoSQL | \\nDocument, Key-Value, Graph | \\nSQL++ (SQL for JSON) | \\n
SurrealDB | \\nFree plan, paid plans start from $23/month | \\nMulti-model | \\nDocument, Graph, Time-series, Relational | \\nSurrealQL | \\n
Convex is a reactive database built for developers like you who want to focus on building great applications without the hassle of managing backend infrastructure. It streamlines backend operations and offers real-time data synchronization, built-in authentication, and AI integration.
\\nWith Convex, you can quickly develop scalable and high-performance applications, leaving the complexity of managing data and backend operations to the platform.
\\nConvex has a free tier. Its paid plan starts from $25/member/month. There’s also a pricing calculator for estimating your monthly bill.
\\nMongoDB Atlas is the fully managed cloud version of MongoDB. It provides a powerful, flexible NoSQL database solution for handling large amounts of unstructured data.
MongoDB Atlas offers a multi-cloud environment and integrates seamlessly with various cloud platforms, allowing you to deploy and scale your database with ease.
\\nMongoDB Atlas has a forever-free plan. Its paid plans start at $0.08/hr.
\\nPlanetScale is a distributed, serverless MySQL-compatible database built to handle the demands of modern applications. This platform, built on top of Vitess, is used by many medium and large-scale companies.
\\nPlanetscale doesn’t offer a free tier, as they removed their hobby plan on March 6, 2024. Their paid plan starts at $39/month.
CockroachDB is a distributed SQL database designed for modern applications. It offers high scalability, resilience, and ease of use. It automatically handles replication and scaling, ensuring your data is always available, even in the face of failures.
CockroachDB offers a free plan, and its paid plan starts at $0.18/hr for 2 vCPUs.
\\nTiDB by PingCAP is a distributed SQL database that allows you to easily handle transactional and analytical workloads. It combines the best aspects of relational databases and NoSQL systems, offering high availability, scalability, and strong consistency.
TiDB offers cloud-serverless, cloud-dedicated, and self-managed environments, allowing you to work with your preferred option.
\\nOceanBase is yet another distributed SQL database designed to handle large-scale, mission-critical workloads. It was developed by Ant Group and Alibaba Group in 2010, and is used by businesses of all sizes.
\\nFeatures
\\nPricing
\\nPlans start at $1.10/hr for 8C32G (2 vCPUs) instances. This includes compute and storage costs.
Supabase positions itself as “the open-source Firebase” alternative. It provides a set of powerful backend services that are easy to integrate, allowing you to quickly build applications using PostgreSQL as your database.
Supabase offers a free plan. Its paid plans start at $25/month.
\\nCouchbase is a NoSQL database platform designed to handle diverse data storage and retrieval needs. It supports a variety of use cases, from cloud to edge applications, and integrates seamlessly with mobile and IoT environments
\\nCouchbase Capella has a free plan, and its paid plans start from $0.15/hr per node. Pricing details for Couchbase Server and Mobile are available on request.
\\nSurrealDB is a multi-model database that allows you to store and manage data in various formats, including document, graph, time-series, and relational models—all within a single, unified platform.
\\nThis flexibility means you can handle diverse data types and complex relationships without having to use multiple databases.
\\n\\nSurrealDB has a free plan, and its paid plans start from $23/month.
\\nHere are some key considerations when selecting an alternative solution to FaunaDB:
\\nWhen choosing a database alternative, it’s crucial to consider the data model and confirm it can meet your application’s needs. Here are some data models you can choose from:
\\nWhen choosing a database alternative, the query language and overall developer experience should be a primary consideration. The ease of use, flexibility, and learning curve of the query language can significantly impact your speed and productivity.
\\nWhen moving from FaunaDB to another platform, migration ease is a key factor:
\\nIn this article, we’ve covered nine FaunaDB alternatives, each with unique features. However, this is by no means an exhaustive list. Many other database solutions are available that might suit your needs.
\\nAs you evaluate your options, consider factors like data models, query languages, scalability, pricing, and support to ensure the best fit for your application.
\\nSwitching from FaunaDB is not just an option, but a necessity, as its services officially ended on May 30, 2025. Existing customers must start exploring alternatives as soon as possible to ensure a smooth transition and uninterrupted service.
The world has never been more distributed — physically or virtually. Thanks to the internet, businesses can now reach users across the globe. To support this reach, infrastructure must scale accordingly. Multi-region setups help reduce latency and increase reliability by distributing resources closer to users.
\\nIn this article, you’ll learn how to set up a multi-region web application using AWS. We’ll walk through a high-level example of deploying a frontend and a small backend REST API distributed across the US and Europe.
\\nYou’ll serve the app through two subdomains. There’s us.mysuperwebsite.com
for users in the Americas and eu.mysuperwebsite.com
for European users. We’ll use us-east-1
and eu-central-1
as our AWS regions.
To begin, your frontend code needs to be compiled, built, and stored on a server. In AWS, the best service for storing static files is S3. AWS Simple Storage Service, or S3, is a hosting service for objects and files. It uses the concept of buckets, a container for your objects, to store your website. Inside your bucket, you can put all your compiled HTML, CSS, and JS files.
\\nAn S3 bucket even comes with a URL, so once you have created your bucket and uploaded your file, you can access your website to see if it looks good. An AWS bucket is by default private, but you can make it public by disabling Block all public access in the Block Public Access settings for this bucket section of the form.
\\nFor this article, however, you don’t need to, as it will be accessible through your domain.
\\nIn a multi-region infrastructure, you will need two buckets to store your application. Unfortunately, S3 buckets are not global resources, so you need a bucket for us-east-1 and another for eu-central-1.
\\nSteps:
\\nTo make your frontend accessible to the public, use CloudFront, AWS’s CDN service. It caches content and improves load times.
\\nTo support a multi-region infrastructure, separate CloudFront distributions in each region pointing to the correct S3 bucket are necessary.
\\nWhen creating your CloudFront distribution, specify the origin as the S3 bucket you created.
\\nFinally, you set up your DNS configuration for each URL to point to the correct CloudFront distribution with Route 53. This service provides you with Domain Name System (DNS), domain name registration, and more.
\\nInside Route 53, you can register your domain name. You can also transfer your domain name. Depending on your needs, the process won’t be the same, so here is the documentation detailing each process.
\\nTo support our multi-region infrastructure, you will use a routing policy called geolocation. This policy allows you to route the traffic to the correct resources based on the origin of their DNS queries.
\\n\\nThis means that European users requesting the EU subdomain would be served the content of your frontend hosted on the EU S3 bucket and cached by the EU Cloudfront.
\\nBy setting up a fallback, you can also specify that if it is down, thanks to health checks, you can redirect to the US version. If you prefer to improve performance, you can use latency-based routing. As the name suggests, it redirects users to the region with the least latency rather than the geographical location.
\\nIn Route 53, you will need a hosted zone for your domain (i.e., mysuperwebsite.com). In AWS, a hosted zone is a container for all your DNS records for this domain. Inside this hosted zone, you can create two CNAME records (one for the EU and one for the US) and define the distribution it points to, including the fallback.
\\nFor Route 53 to handle the fallback when the main region is down, don’t forget to set up a health check that will ping your website to make sure it is still up and running.
\\nFor example, for the EU, it would look like — eu.mysuperwebsite.com (CNAME record):
\\nHere is an example to create the CNAME record:
\\nBy this point, you should have your DNS set up for each region that is linked to your CloudFront distribution, pointing to the correct S3 bucket.
\\nFor your backend, use AWS Lambda with API Gateway.
\\nLambda functions let you run code without managing servers. Concretely, when your frontend makes a call to your backend, i.e., through your API, a Lambda will be spun up to execute the code inside the function.
\\nLambdas are great for web applications, provided that your function doesn’t take more than 15 minutes. For runtime longer than 15 minutes, if you still want a serverless architecture, you can split up your logic with AWS Step Functions. Lambdas are region-specific resources and must be duplicated to support a multi-region architecture.
\\nThankfully, creating a Lambda function is straightforward. Click on Create Function and fill out the form to create your Lambda:
\\nOnce created, you will be taken to your Lambda details page to upload and edit your code:
\\nYou can also test your Lambda to make sure your function is working as intended. You can see your Lambda’s execution thanks to the CloudWatch logs. Go to Monitor and click on View CloudWatch Logs. This will bring you to a page with all the log groups. Logs are grouped by timestamps.
\\nCloudWatch logs are also region-specific, so if you don’t find a group corresponding to the timestamp you are looking for, make sure you are in the right region.
\\nFinally, to make your lambda accessible to the internet, you will need an API Gateway. This AWS service allows you to create a REST API with endpoints callable with an HTTP request. API Gateway is also region-specific, which means you will have different URLs and, more specifically, API Keys.
\\nCreate an API Gateway by giving it a name. Once done, you can create API endpoints called resources inside your gateway:
\\nOnce the resource has been created, it will appear in your list. You can then click on it and start creating methods (i.e., GET, POST, …):
\\nClick on Create method, and in this form, you can fill out the form and specify the lambda function you created previously.
\\nWhen done, you can head to API Settings and grab the Default endpoint, aka the URL from which your REST API is accessible. Use a tool like Postman to test whether your API works correctly.
\\nAs an option, you can go a step further and add some protection for your REST API with an API Key. In AWS, API keys are typically attached to usage plans. These plans allow you to set quotas and throttles to prevent DDoS attacks or users abusing your API, which would result in extra costs for you. Here is the documentation detailing usage plans and API Keys.
\\nOnce your lambdas and API Gateways are created, you can go into your frontend and change the configuration to make sure it hits the correct region.
\\nFor clarity, I suggest splitting your .env file by region. Meaning, you will have a .env.us-east-1 and a .env.eu-central-1 with their specific configurations:
\\nREACT_APP_CURRENT_REGION=<region>\\nREACT_APP_API_URL=<api_gateway_url>\\nREACT_APP_API_KEY=<api_gateway_api_key>\\n
Then, in your build process, grab the correct env file depending on the region being built so that when your JS is compiled, it uses the correct API_URL.
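One way to wire this up, assuming you use the env-cmd package (an assumption; any env-file loader works), is to select the file per region at build time:

```bash
# Build the US bundle with US config, and the EU bundle with EU config
npx env-cmd -f .env.us-east-1 npm run build
npx env-cmd -f .env.eu-central-1 npm run build
```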
\\nIn this article, every step you read was done manually. However, AWS developed a software development framework called Cloud Development Kit, or CDK. This framework allows you to create and manage cloud resources through code.
\\nYou can integrate your CDK code with your pipeline and automate your cloud deployment process. This framework is helpful in multi-region deployment, as the larger your infrastructure grows, the harder it will be to maintain manually.
\\nAutomating your infrastructure maintenance reduces potential human errors and helps with consistency.
\\nYou now understand how to create a multi-region web application with AWS. By combining S3, CloudFront, and Route 53 for the frontend, and Lambda and API Gateway for the backend, you can:
\\nMulti-region setups are essential for global businesses. With the optional use of CDK, managing this complexity becomes much more scalable and maintainable.
In this article, I will explain the theoretical concepts behind the WebSocket protocol. Additionally, I’ll demonstrate how to build a real-time collaborative document editing app with a Node.js backend and React frontend using the WebSocket protocol.
\\nIt was previously quite common for most web apps to have a closely connected backend and frontend, so the apps served data with the view content to the user’s browser. Nowadays, we typically develop loosely coupled, separate backends and frontends by connecting the two with a network-oriented communication line.
\\nFor example, developers often use the RESTful pattern with the HTTP protocol to implement a communication line between the frontend and backend for data transfer. But the HTTP-based RESTful concept uses a simplex communication (one-way), so we can’t push data directly from the server (backend) to the client (frontend) without implementing workarounds such as polling.
\\nThe WebSocket protocol solves this drawback of the traditional HTTP pattern, offers a full-duplex (or two-way) communication mechanism, and helps developers build real-time apps, such as chat, trading, and multi-player game apps.
\\nThe WebSocket protocol offers persistent, real-time, full-duplex communication between the client and the server over a single TCP socket connection.
\\nThe WebSocket protocol has only two agendas: to open up a handshake and to help the data transfer. Once the server accepts the handshake request sent by the client and initiates a WebSocket connection, they can send data to each other with less overhead at will.
\\nWebSocket communication takes place over a single TCP socket using either WS (port 80) or WSS (port 443) protocol. Every browser except Opera Mini provides admirable support for WebSockets at the time of writing, according to Can I Use.
\\nThe following video explains how the WebSocket protocol works and benefits users compared to the traditional HTTP protocol:
\\nEditor’s note: This post was updated by Isaac Okoro in May 2025 to add a native WebSocket example using React Hooks, explain the usage of react-use-websocket, and clarify the role of Socket.IO.
\\nHistorically, creating web apps that needed real-time data required an abuse of HTTP protocol to establish bidirectional data transfer. There were multiple methods used to achieve real-time capabilities by enabling a way to send data directly from the server to clients, but none of them were as efficient as WebSocket. HTTP polling, HTTP streaming, Comet, and SSE (server-sent events) all have their drawbacks.
\\nThe very first attempt to solve the problem was by polling the server at regular intervals. The normal polling approach fetches data from the server frequently based on an interval defined on the client side (typically using setInterval
or recursive setTimeout
). On the other hand, the long polling approach is similar to normal polling, but the server handles the timeout/waiting time.
The HTTP long polling lifecycle is as follows:
\\nThere were a lot of loopholes in long polling — header overhead, latency, timeouts, caching, and so on.
\\nThis mechanism saved the pain of network latency because the initial request is kept open indefinitely. The request is never terminated, even after the server pushes the data. The first three lifecycle methods of HTTP streaming are the same in HTTP long polling.
\\nWhen the response is sent back to the client, however, the request is never terminated; the server keeps the connection open and sends new updates whenever there’s a change. HTTP streaming is a generic concept, and you can design your own streaming architecture with available low-level streaming APIs in server-side and client-side modules, i.e., building an HTTP streaming solution with Node streams and the browser’s Fetch API.
\\nWith SSE, the server pushes data to the client, similar to HTTP streaming. SSE is a standardized form of the HTTP streaming concept and comes with a built-in browser API.
\\nA chat or gaming application cannot completely rely on SSE. This is because we can’t send data from the client to the server using the same server-side event stream, as SSE isn’t full-duplex and only lets you send data directly from the server to clients.
\\nThe perfect use case for SSE would be, for example, the Facebook news feed: whenever new posts come in, the server pushes them to the timeline. SSE is sent over traditional HTTP and has restrictions on the number of open connections.
\\nLearn more about the SSE architecture from this GitHub Gist file. These methods were not just inefficient compared to WebSockets. The code that went into them appeared as a workaround to make a request-reply-type protocol full-duplex-like.
\\nWebSockets are designed to supersede existing bidirectional communication methods. The existing methods described above are neither reliable nor efficient when it comes to full-duplex real-time communications.
\\n\\nWebSockets are similar to SSE but also allow messages to be sent from the client to the server. Connection restrictions are no longer an issue because data is served over a single TCP socket connection.
\\nAs mentioned in the introduction, the WebSocket protocol has only two agendas: 1.) to open up a handshake, and 2.) to help the data transfer.
\\nLet’s see how WebSockets fulfill those agendas. To do that, I’m going to spin off a Node.js server and connect it to a client built with React.
\\nFirst, download or clone this GitHub repository onto your computer. This repository contains the source code of the sample collaborative document editing app. Open it with your favorite code editor. You will see two directories as follows:
\\nserver: A Node.js WebSocket server that handles the document editor’s backend logic\\nclient: The React app that connects to the WebSocket server for real-time features\\n\\n
You can start the document editor app with the following commands:
```bash
# Set up and start the server
cd server
npm install # or yarn install
npm start   # or yarn start

# Set up and start the client
cd client
npm install # or yarn install
npm start   # or yarn start
```
Run the app with the above commands, try to open it with two browser windows, then edit the document from both:
\\nLet’s study the source code and learn how it works using WebSockets!
\\nWe can make use of a single port to spin off the HTTP server and attach the WebSocket server. The code below (taken from server/index.js
) shows the creation of a simple HTTP server. Once it is created, we tie the WebSocket server to the HTTP port:
```js
const { WebSocketServer } = require('ws');
const http = require('http');

// Spinning up the HTTP server and the WebSocket server
const server = http.createServer();
const wsServer = new WebSocketServer({ server });
const port = 8000;

server.listen(port, () => {
  console.log(`WebSocket server is running on port ${port}`);
});
```
In the sample project, I used the popular ws library to attach a WebSocket server instance to an HTTP server instance. Once the WebSocket server is attached to the HTTP server instance, it will accept the incoming WebSocket connection requests by upgrading the protocol from HTTP to WebSocket.
\\nI maintain all the connected clients as an object in my code with a unique key generated via the uuid
package on receiving their request from the browser:
```js
const { v4: uuidv4 } = require('uuid');

// I'm maintaining all active connections in this object
const clients = {};

// A new client connection request received
wsServer.on('connection', function(connection) {
  // Generate a unique code for every user
  const userId = uuidv4();
  console.log(`Received a new connection.`);

  // Store the new connection and handle messages
  clients[userId] = connection;
  console.log(`${userId} connected.`);
});
```
When you open the app with a new browser tab, you’ll see a generated UUID on your terminal as follows:
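The `clients` object also makes it easy to push updates out to everyone. A minimal fan-out helper, sketched here on the assumption that you reuse the `clients` object above and the `ws` package’s exported `WebSocket.OPEN` constant, could look like this:

```js
const { WebSocket } = require('ws');

// Send a JSON message to every connected client that is still open
function broadcastMessage(json) {
  const data = JSON.stringify(json);
  Object.values(clients).forEach((client) => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(data);
    }
  });
}
```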
\\nWhen initiating a standard HTTP request to establish a connection, the client includes the Sec-WebSocket-Key
within the request headers. The server encodes and hashes this value and adds a predefined GUID. It echoes the generated value in the Sec-WebSocket-Accept
in the server-sent handshake.
Once the request is accepted by the server (after necessary validations in production), the handshake is fulfilled with status code `101` (Switching Protocols). If you see anything other than status code `101` in the browser, the WebSocket upgrade has failed, and the normal HTTP semantics will be followed.
The `Sec-WebSocket-Accept` header field indicates whether the server is willing to accept the connection or not. Also, if the response lacks an `Upgrade` header field, or the `Upgrade` value does not equal `websocket`, the WebSocket connection has failed.
The successful WebSocket server handshake looks like this:
```
HTTP GET ws://127.0.0.1:8000/ 101 Switching Protocols
Connection: Upgrade
Sec-WebSocket-Accept: Nn/XHq0wK1oO5RTtriEWwR4F7Zw=
Upgrade: websocket
```
At the client level, I use the react-use-websocket library to initiate a WebSocket connection. We can also use the built-in native WebSocket browser API without any third-party package. However, using the browser API directly in React functional components typically generates complex code.
\\nBear with me: for those who may not fancy the react-use-websocket library, I will show you how it is done using the native WebSocket afterwards.
\\nWe can also create a custom React Hook for WebSocket connections, but then we will re-invent the wheel and create a react-use-websocket clone. React react-use-websocket offers the useWebSocket
Hook to manage WebSocket connections from React functional components.
As soon as the request is accepted by the server, we will see `WebSocket connection established.` on the browser console.
Here’s the initial scaffold to create the connection to the server via the `App` component (in `client/src/App.js`):
```jsx
// App.js basic setup
import React, { useEffect, useRef } from 'react';
import useWebSocket from 'react-use-websocket';

const WS_URL = 'ws://127.0.0.1:8000';

function App() {
  const wsRef = useRef(null);

  const { getWebSocket, readyState } = useWebSocket(WS_URL, {
    onOpen: () => {
      console.log('WebSocket connection established.');
      wsRef.current = getWebSocket();
    },
    onClose: () => {
      console.log('WebSocket connection closed.');
    }
  });

  // Lifecycle cleanup
  useEffect(() => {
    return () => {
      if (wsRef.current && wsRef.current.readyState === WebSocket.OPEN) {
        wsRef.current.close();
      }
    };
  }, []);

  return (
    <div>Hello WebSockets!</div>
  );
}

export default App;
```
The following headers are sent by the client to establish the handshake:
```
HTTP GET ws://127.0.0.1:8000/ 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: vISxbQhM64Vzcr/CD7WHnw==
Origin: http://localhost:3000
Sec-WebSocket-Version: 13
```
Now that the client and server are connected via the WebSocket handshake event, the WebSocket connection can transmit messages as it receives them, thereby fulfilling the second agenda of the WebSocket protocol.
\\nUsers can join together and edit a document in the sample React app. The app tracks two events:
\\nThe protocol allows us to send and receive messages as binary data or UTF-8 (N.B., transmitting and converting UTF-8 has less overhead). Try inspecting WebSocket messages by using Chrome DevTools to see sent/received messages, as shown in the preview above.
\\nUnderstanding and implementing WebSockets is very easy as long as we have a good understanding of the socket events: onopen
, onclose
, and onmessage
. The terminologies are the same on both the client and the server side.
From the client, when a new user joins in or when content changes, we trigger a message to the server using `sendJsonMessage` to take the new information to the server:
```jsx
/* When a user joins, I notify the
server that a new user has joined to edit the document. */
// For the login and content change section
function LoginSection({ onLogin }) {
  const [username, setUsername] = useState('');

  // Share WebSocket connection from parent
  useWebSocket(WS_URL, {
    share: true,
    filter: () => false // Don't process messages in this component
  });

  const logInUser = useCallback(() => {
    if (!username.trim()) {
      return;
    }
    onLogin && onLogin(username);
  }, [username, onLogin]);

  return (
    <form onSubmit={(e) => { e.preventDefault(); logInUser(); }}>
      <input
        value={username}
        onChange={(e) => setUsername(e.target.value)}
      />
      <button type="submit">Join</button>
    </form>
  );
}

/* When content changes, we send the
current content of the editor to the server. */
function Editor({ html, onContentChange }) {
  const editorRef = useRef(null);

  // Sync with incoming content changes
  useEffect(() => {
    if (editorRef.current && editorRef.current.value !== html) {
      editorRef.current.value = html;
    }
  }, [html]);

  const handleHtmlChange = useCallback((e) => {
    onContentChange({
      type: 'contentchange',
      content: e.target.value
    });
  }, [onContentChange]);

  return (
    <textarea
      ref={editorRef}
      value={html}
      onChange={handleHtmlChange}
    />
  );
}
```
Listening to messages from the server is pretty simple. For example, see how the History
component listens to user events and renders the activity log:
function History() {\\n const { lastJsonMessage } = useWebSocket(WS_URL, {\\n share: true,\\n filter: isUserEvent\\n });\\n\\n const activities = useMemo(() => {\\n return lastJsonMessage?.data?.userActivity || [];\\n }, [lastJsonMessage]);\\n\\n return (\\n <ul>\\n {activities.map((activity, index) => (\\n <li key={`activity-${index}`}>{activity}</li>\\n ))}\\n </ul>\\n );\\n}\\n\\n
Here we used the share: true
setup to reuse the existing WebSocket connection we initiated in the App
component. By default, the useWebSocket
Hook re-renders the component whenever the WebSocket connection receives a new message from the server or when the connection state changes.
As a result, the History
component will re-render for user and editor events. So, as a performance enhancement, we use the filter: isUserEvent
setup to re-render the component only for user events.
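The isUserEvent filter itself isn't shown above; a minimal sketch, assuming the server tags user events with a type field, could look like this:

// Re-render only for messages whose type marks a user event
function isUserEvent(message) {
  const evt = JSON.parse(message.data);
  return evt.type === 'userevent';
}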
As pointed out earlier, if we use the native WebSocket API instead, our code will look like this:
\\n// NativeWebSocketExample.js\\nimport React, { useState, useEffect, useRef, useCallback } from \'react\';\\n\\nconst NativeWebSocketExample = () => {\\n // State for tracking messages and connection status\\n const [messages, setMessages] = useState([]);\\n const [inputMessage, setInputMessage] = useState(\'\');\\n const [connectionStatus, setConnectionStatus] = useState(\'Disconnected\');\\n\\n // Use useRef to keep a reference to the WebSocket instance\\n // This ensures the same instance persists between renders\\n const socketRef = useRef(null);\\n\\n // Setup WebSocket connection\\n useEffect(() => {\\n // Create a new WebSocket connection\\n const socket = new WebSocket(\'ws://localhost:8000\');\\n\\n // Store the socket in our ref\\n socketRef.current = socket;\\n\\n // Setup event handlers\\n socket.onopen = () => {\\n console.log(\'WebSocket connection established\');\\n setConnectionStatus(\'Connected\');\\n };\\n\\n socket.onmessage = (event) => {\\n // Parse incoming messages\\n try {\\n const data = JSON.parse(event.data);\\n setMessages((prevMessages) => [...prevMessages, data]);\\n } catch (error) {\\n console.error(\'Error parsing message:\', error);\\n setMessages((prevMessages) => [...prevMessages, { text: event.data, type: \'text\' }]);\\n }\\n };\\n\\n socket.onerror = (error) => {\\n console.error(\'WebSocket error:\', error);\\n setConnectionStatus(\'Error\');\\n };\\n\\n socket.onclose = () => {\\n console.log(\'WebSocket connection closed\');\\n setConnectionStatus(\'Disconnected\');\\n };\\n\\n // Clean up the WebSocket connection when the component unmounts\\n return () => {\\n console.log(\'Closing WebSocket connection\');\\n if (socketRef.current && socketRef.current.readyState === WebSocket.OPEN) {\\n socketRef.current.close();\\n }\\n };\\n }, []); // Empty dependency array means this runs once on mount\\n\\n // Function to send messages\\n const sendMessage = useCallback(() => {\\n if (\\n socketRef.current && \\n socketRef.current.readyState === WebSocket.OPEN && \\n inputMessage.trim() !== \'\'\\n ) {\\n // Create a message object\\n const messageObj = {\\n type: \'message\',\\n text: inputMessage,\\n timestamp: new Date().toISOString()\\n };\\n\\n // Send as JSON string\\n socketRef.current.send(JSON.stringify(messageObj));\\n\\n // Clear input field after sending\\n setInputMessage(\'\');\\n }\\n }, [inputMessage]);\\n\\n // Handle Enter key in the input field\\n const handleKeyPress = (e) => {\\n if (e.key === \'Enter\') {\\n sendMessage();\\n }\\n };\\n\\n return (\\n <div className=\\"websocket-container\\">\\n <div className=\\"status-bar\\">\\n Status: <span className={`status-${connectionStatus.toLowerCase()}`}>{connectionStatus}</span>\\n </div>\\n\\n <div className=\\"message-container\\">\\n {messages.length === 0 ? 
(\\n <div className=\\"no-messages\\">No messages yet</div>\\n ) : (\\n messages.map((msg, index) => (\\n <div key={index} className=\\"message\\">\\n <div className=\\"message-text\\">{msg.text}</div>\\n {msg.timestamp && (\\n <div className=\\"message-time\\">\\n {new Date(msg.timestamp).toLocaleTimeString()}\\n </div>\\n )}\\n </div>\\n ))\\n )}\\n </div>\\n\\n <div className=\\"input-area\\">\\n <input\\n type=\\"text\\"\\n value={inputMessage}\\n onChange={(e) => setInputMessage(e.target.value)}\\n onKeyPress={handleKeyPress}\\n placeholder=\\"Type a message...\\"\\n disabled={connectionStatus !== \'Connected\'}\\n />\\n <button \\n onClick={sendMessage}\\n disabled={connectionStatus !== \'Connected\' || inputMessage.trim() === \'\'}\\n >\\n Send\\n </button>\\n </div>\\n </div>\\n );\\n};\\n\\nexport default NativeWebSocketExample;\\n\\n// Usage in another component:\\n/*\\nimport React from \'react\';\\nimport NativeWebSocketExample from \'./NativeWebSocketExample\';\\n\\nfunction App() {\\n return (\\n <div className=\\"App\\">\\n <h1>WebSocket Chat Example</h1>\\n <NativeWebSocketExample />\\n </div>\\n );\\n}\\n\\nexport default App;\\n\\n
Using the native WebSocket is particularly useful when you need better control over your WebSocket implementation, or when you want to minimize external dependencies in your React application.
On the server, we simply have to catch the incoming message and broadcast it to all the clients connected to the WebSocket server.
This is one of the differences between the popular Socket.IO library and plain WebSocket: with WebSocket, we need to manually send the message to all clients. Socket.IO is a full-fledged library, so it offers built-in methods to broadcast messages to all connected clients or to specific clients based on a namespace.
\\nSee how we handle broadcasting in the backend by implementing the broadcastMessage()
function:
function broadcastMessage(json) {\\n // We are sending the current data to all connected active clients\\n const data = JSON.stringify(json);\\n for(let userId in clients) {\\n let client = clients[userId];\\n if(client.readyState === WebSocket.OPEN) {\\n client.send(data);\\n }\\n }\\n}\\n\\n
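For context, broadcastMessage() would typically be called from the server's message handler. Here's a rough sketch with the state updates elided; typesDef.CONTENT_CHANGE is an assumed constant mirroring the typesDef.USER_EVENT used below:

// Rough sketch of the server-side message handler
connection.on('message', (message) => {
  const dataFromClient = JSON.parse(message.toString());
  const json = { type: dataFromClient.type };

  if (dataFromClient.type === typesDef.USER_EVENT) {
    // ...record the new user in `users` and `userActivity`...
  } else if (dataFromClient.type === typesDef.CONTENT_CHANGE) {
    // ...store the latest editor content...
  }

  json.data = { users, userActivity };
  broadcastMessage(json);
});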
When the browser tab is closed, the WebSocket connection fires the close
event, which allows us to write the logic to terminate the current user’s connection. In my code, I broadcast a message to the remaining users when a user leaves the document:
function handleDisconnect(userId) {\\n console.log(`${userId} disconnected.`);\\n const json = { type: typesDef.USER_EVENT };\\n const username = users[userId]?.username || userId;\\n userActivity.push(`${username} left the document`);\\n json.data = { users, userActivity };\\n delete clients[userId];\\n delete users[userId];\\n broadcastMessage(json);\\n}\\n\\n// User disconnected\\nconnection.on(\'close\', () => handleDisconnect(userId));\\n\\n
Test this implementation by closing a browser tab that has this app open. You'll see the disconnect reflected in both the history section of the app and the browser console.
\\nIn the sample app, we used WS as the protocol identifier of the WebSocket connection URL. WS refers to a normal WebSocket connection that gets established via the plain-text HTTP protocol.
This connection stream is plain text and no more secure than traditional http://
URLs and can be intercepted by external entities, so WebSocket offers the WebSocket Secure (WSS) mode via the WSS protocol identifier by integrating the SSL/TLS protocol. Similar to https://
URLs, WSS-based connections cannot be read by intermediaries because the data gets encrypted with the SSL/TLS protocol.
Here is a summary of the differences between WS and WSS:
\\nComparison factor | \\nWS | \\nWSS | \\n
---|---|---|
The abbreviation stands for | \\nWebSocket | \\nWebSocket Secure | \\n
Connection initialization protocol | \\nHTTP | \\nHTTPS (HTTP Secure) | \\n
Data encryption | \\nNo | \\nYes, via the SSL/TLS protocol using RSA-like algorithms | \\n
Transport layer security | \\nNo | \\nYes, data encryption is handled via the SSL/TLS protocol | \\n
Application layer security | \\nNo, the developer should handle these protections | \\nNo, the developer should handle these protections | \\n
Using WSS over WS is recommended to prevent man-in-the-middle (MITM) attacks, but WSS alone doesn't provide cross-origin or application-level security. Make sure to implement the necessary URL origin checks in WebSocket servers and a strong authentication method (e.g., a token-based technique) in applications to prevent application-level security vulnerabilities.
\\nFor example, if a chat app needs login/signup, make sure to let only authenticated users establish WebSocket connections by validating a token before the HTTP handshake succeeds.
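As a sketch of what that gatekeeping could look like with the ws library, you can validate the origin and a token while the HTTP upgrade is still in flight; validateToken here is a hypothetical, app-specific function:

// Sketch: rejecting connections before the handshake completes
const wss = new WebSocketServer({
  port: 8000,
  verifyClient: (info, done) => {
    const originOk = info.origin === 'http://localhost:3000';
    const token = new URL(info.req.url, 'http://localhost').searchParams.get('token');
    done(originOk && validateToken(token)); // validateToken is app-specific
  }
});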
\\nUsing WSS instead of WS is programmatically simple. You need to use wss
in the WebSocket connection URL in the React app:
const WS_URL = \'wss://example.com\';\\n\\n
Also, make sure to use the https
module with digital certificates/keys as follows:
const https = require(\'https\');\\nconst fs = require(\'fs\');\\n\\nconst httpsOptions = {\\n key: fs.readFileSync(\'./crypto/key.pem\'),\\n cert: fs.readFileSync(\'./crypto/certificate.pem\')\\n};\\n\\nconst server = https.createServer(httpsOptions);\\n\\n
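One detail the snippet above leaves implicit: the WebSocket server has to be attached to this HTTPS server rather than to a plain port. A minimal sketch, assuming the ws library used earlier:

// Attach the WebSocket server to the HTTPS server for wss:// support
const WebSocket = require('ws');
const wss = new WebSocket.Server({ server });

server.listen(8000);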
That’s all for enabling WSS from the programming perspective, but from the networking perspective, you have to generate cryptographic keys and certificates via a trusted Certificate Authority (CA).
\\nNode.js doesn’t offer an inbuilt API to create WebSocket servers or client instances, so we should use a WebSocket library on Node.js. For example, we used the ws library in this tutorial.
\\nThe browser standard offers the built-in WebSocket API to connect with WebSocket servers, so selecting an external library is optional on the browser. Using libraries may improve your code readability on frontend frameworks and boost productivity as they come with pre-developed features.
For example, we used the react-use-websocket library to connect to the WebSocket server while writing less implementation code. react-use-websocket is a specialized Hook library that simplifies WebSocket integration in React applications.
\\nIt provides a simple way to establish WebSocket connections, manage their lifecycle, and handle real-time data exchange, all within a React component.
\\nimport React from \'react\';\\nimport useWebSocket from \'react-use-websocket\';\\n\\nfunction WebSocketComponent() {\\n const socketUrl = \'ws://127.0.0.1:8000\';\\n\\n const {\\n sendMessage,\\n sendJsonMessage,\\n lastMessage,\\n lastJsonMessage,\\n readyState,\\n getWebSocket\\n } = useWebSocket(socketUrl, {\\n onOpen: () => console.log(\'WebSocket connection established\'),\\n onClose: () => console.log(\'WebSocket connection closed\'),\\n shouldReconnect: (closeEvent) => true, // Attempt to reconnect on all close events\\n reconnectAttempts: 10,\\n reconnectInterval: 3000\\n });\\n\\n // Example of sending a message\\n const handleSendMessage = () => {\\n sendJsonMessage({ type: \'hello\', content: \'Hello Server!\' });\\n };\\n\\n // Process incoming messages\\n React.useEffect(() => {\\n if (lastJsonMessage) {\\n console.log(\'Received message:\', lastJsonMessage);\\n }\\n }, [lastJsonMessage]);\\n\\n return (\\n <div>\\n <button onClick={handleSendMessage}>Send Message</button>\\n <div>Last message: {lastMessage ? lastMessage.data : null}</div>\\n <div>Connection status: {readyState}</div>\\n </div>\\n );\\n}\\n\\n
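Since readyState is a numeric enum, a common pattern is to map it to a readable label using the ReadyState export from the same library:

import { ReadyState } from 'react-use-websocket';

// Map the numeric readyState to a human-readable status
const connectionStatus = {
  [ReadyState.CONNECTING]: 'Connecting',
  [ReadyState.OPEN]: 'Open',
  [ReadyState.CLOSING]: 'Closing',
  [ReadyState.CLOSED]: 'Closed',
  [ReadyState.UNINSTANTIATED]: 'Uninstantiated',
}[readyState];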
The library is available on npm at react-use-websocket and has gained significant popularity for React WebSocket implementations with over 227,000 weekly downloads.
\\nReact useWebSocket is not the only library that lets you work with WebSockets in React. Choose a preferred WebSocket library for React from the following table based on the listed pros and cons:
\\nWebSocket library | \\nDescription | \\nPros | \\nCons | \\n
---|---|---|---|
socket.io-client | \\nBidirectional and low-latency communication for every platform | \\nFallbacks to HTTP polling if WebSocket connections are not supported. Offers many inbuilt features, such as broadcasting, offline message queue, client namespaces, automatic re-connect logic, etc. | \\nDoesn’t offer React-specific APIs, so developers have to write connectivity, cleanup code with core React Hooks. App bundle size increment is higher than React itself | \\n
React useWebSocket | \\nReact Hook for WebSocket communication | \\nOffers a pre-developed React hook with inbuilt React-specific features, such as auto re-render, using a shared WS object among components, etc. Supports Socket.IO connections and implements automatic re-connect logic. Lightweight compared to other libraries | \\nDoesn’t work inside React class components. Requires React 16.8 or higher. Doesn’t implement fallback transport methods | \\n
sockJS-client | \\nA JavaScript library for browser that provides a WebSocket-like object | \\nFallbacks to HTTP polling if WebSocket connections are not supported. Offers a W3C WebSockets API-like interface | \\nDoesn’t offer React-specific APIs. App bundle size increment is higher than React itself | \\n
Sarus | \\nA minimal WebSocket library for the browser | \\nA minimal library that implements an offline message queue and re-connect logic. Offers a simple API that anyone can learn in seconds | \\nDoesn’t offer React-specific APIs. Doesn’t implement fallback transport methods | \\n
N.B., Bundle size increments were calculated using the BundlePhobia npm package size calculator tool.
If your app doesn't need a fallback transport method, selecting react-use-websocket is a good decision. You can also drop react-use-websocket for an even more lightweight app bundle by using the native WebSocket browser API. Choose a library, or use the native browser API, according to your preference.
Note that Socket.IO and SockJS work as fully featured bidirectional messaging frameworks that use WebSockets as the first transport method. They offer productive built-in features with fallback mechanisms, but they also increase your app bundle size, and they require a matching library on the server (e.g., the socket.io package for Node-based servers).
\\nWhen is it appropriate to use Socket.IO, and how does it differ from native WebSocket?
Socket.IO is a great choice for React apps, especially when you want features like chat or live updates without managing the messy parts yourself. As I mentioned earlier, it uses WebSocket underneath when possible, but adds its own protocol on top. This gives us extras like automatic reconnection, fallback support, and easy event handling, all of which fit nicely with React's Hook-based structure.
\\nNative WebSocket, on the other hand, is the raw, browser-standard option. It’s faster and lightweight, but it comes with more manual work, like handling reconnections and parsing messages. It’s your best option if you need full control.
\\nWebSockets are one of the most interesting and convenient ways to achieve real-time capabilities in a modern application. They give us a lot of flexibility to leverage full-duplex communications. I’d strongly suggest working with WebSocket using the native WebSocket API or other available libraries that use WebSocket as a transport method.
\\nHappy coding! 🙂
Regardless of the size of your team, sharing information effectively is essential. Your teammates need access to the information necessary for them to do their jobs well. While many tools and processes can support this, not every technique will suit every team — company culture, team structure, and maturity all play a role.
\\nIn this article, I’ll outline proven techniques for scaling knowledge across engineering teams — including product documentation, tech lead office hours, video-enhanced PRs, and structured knowledge-sharing. These approaches aren’t theoretical; they’ve delivered results in real-world teams and can do the same for yours.
Storing product documentation in tools like Confluence is one of the most popular techniques for sharing information. When done well, it can serve as a thorough manual of your product — helping engineers, customer support, and product managers find answers quickly.
For this technique to be effective, however, it must be embedded in the team's culture. Many developers roll their eyes at documentation. And perhaps rightly so: it's often outdated, incomplete, or hard to navigate. And without a strong culture of maintenance, it stays that way.
\\nI once worked for a healthcare company. Their product, among other things, offered customers the ability to search and book appointments with doctors. Doctors regularly called customer support, confused by how our appointment search worked. We decided that if a customer had a question, the answer should already exist in the product documentation. If it didn’t, support would open a bug ticket. When a developer picked it up, even if the issue wasn’t a bug, they had to update or link to the documentation before closing it.
\\nThis forced us to treat documentation as a product asset — not an afterthought. It also motivated our team to build internal tools so customers could find answers themselves, reducing the support load and bug backlog.
\\nPro tip — Embed documentation updates into your “definition of done.” That’s how you make documentation a living system, not a forgotten wiki.
\\nImagine being assigned a bug, diving into the code, doing a git blame
to see the latest change, and being brought to a GitHub PR page with no description. All that is left is for you to read the code change to understand why it was done this way. Many developers have run into this issue and felt this frustration.
Well-crafted PR templates can help. By standardizing what developers should include — such as change descriptions, screenshots, or links to related tasks — you make your codebase easier to navigate and debug later on.
I would even suggest taking it a step further for distributed teams by including video recordings. I once worked with an education startup that was fully distributed, with developers in both Europe and South America. Getting everyone to work at the same time wasn't always possible, and we wanted to avoid the back-and-forth that can occur when a PR is created and the reviewers don't understand a change.
\\n\\nTo prevent this, we started creating video recordings to explain the code change and include them in the pull request template. Developers would walk through their changes, explain the why behind their decisions, and highlight anything reviewers should watch for.
\\nI strongly recommend implementing pull request templates as soon as possible. Regardless of the size of your team, documenting code changes will help streamline code review, track the code version, and make debugging simpler.
\\nGet started — See the many PR templates created by Steve Mao, including bug issues or new features, to help you get started.
\\nEngineers can easily get siloed — especially in larger companies or those with many parallel projects. If your company doesn’t have a regular meeting time where engineers can learn what other people are doing, you lose a precious opportunity for them to collaborate and share ideas.
\\nKnowledge-sharing sessions help break down those walls.
\\nThese sessions come in many forms — lunch-and-learns, project postmortems, or 15-minute lightning talks. The format matters less than the consistency and usefulness of the content.
\\nAt one company, I led a project that moved our email system from AWS Pinpoint to Braze. After it wrapped, I hosted a one-hour session for the entire engineering team. I covered the new system’s capabilities, how we warmed IPs, and some Google Analytics tracking quirks we discovered.
\\nMany devs who hadn’t touched marketing tools before found it enlightening — and it sparked a series of future talks on other projects.
\\n\\nPro tip — Encourage short, informal presentations for teams new to this practice. As comfort grows, you can graduate to more structured sessions across teams or departments.
\\nProduct documentation supports user and cross-functional teams, but technical documentation is built by and for engineers. This type of documentation typically covers dev environments, deployment processes, service ownership, and — crucially — technical debt.
\\nWhen I worked at my healthcare company, we dealt with significant technical debt. I remember a particular case where a coworker and I mapped and documented a particularly complex bug. Instead of patching it and moving on, we documented the entire scenario, proposed a few solutions, and shared it with the broader team. That write-up became a reference for similar issues down the line.
\\nEven if a fix doesn’t get prioritized, documenting it builds a paper trail that outlives staff changes. It also gives engineers — especially those aiming to become seniors — a way to demonstrate ownership and strategic thinking.
\\nYou could even create a space just for tech debt write-ups. You’ll be thankful the next time someone asks, “Why haven’t we fixed this yet?”
\\nI have only seen this technique implemented once in a distributed, fully asynchronous company.
The idea is simple. Tech leads dedicate a specific block of time to questions or pair programming sessions with the rest of the team. Instead of fielding numerous one-off requests, they can block off periods for questions and focus on their own work the rest of the day. This technique improves accessibility and ensures that engineers don't have to wait for a tech lead to be available.
\\nThis format works especially well in remote-first teams, where quick Slack pings can easily pile up. Office hours offer predictability for the team and focus for the lead.
\\nYou could even rotate responsibility across senior engineers or theme different office hours (e.g., “DevOps Thursdays” or “Data Fridays”).
Knowledge sharing isn't just about writing things down, especially as your team scales.
\\nThe techniques in this article aren’t one-size-fits-all. Start with one or two that fit your team’s size and culture. I’ve also built this quick summary table so you can get started:
\\nTechnique | \\nBest for | \\nTeam size | \\nTeam environment fit | \\nKey benefits | \\nWatchouts | \\n
---|---|---|---|---|---|
Product documentation | \\nEstablishing a central knowledge source | \\nSmall to large | \\nWorks well in process-driven, detail-oriented, async teams | \\nImproves onboarding, supports customer-facing teams | \\nNeeds continuous updating; embed into workflows | \\n
Pull request templates and videos | \\nMaking code reviews clearer and asynchronous | \\nMedium to large | \\nIdeal for distributed, async-first teams with code ownership | \\nStreamlines handoff, reduces review friction | \\nUse short videos; enforce template usage | \\n
Knowledge-sharing sessions | \\nPromoting cross-team learning and alignment | \\nSmall to medium | \\nFits collaborative, communicative, hybrid, or in-person teams | \\nBuilds culture, encourages peer learning | \\nMake it regular; offer quick, informal formats | \\n
Internal technical documentation | \\nRetaining technical and architectural context | \\nMedium to large | \\nGood for teams that value planning, ownership, and structure | \\nHelps track tech debt, aids continuity | \\nDocument major decisions, not just how-tos | \\n
Tech lead office hours | \\nProviding structured mentoring and support | \\nMedium | \\nBest for async or remote teams with a culture of accessibility | \\nEncourages accessibility, reduces interruptions | \\nTimebox wisely; rotate responsibilities if needed | \\n
Over time, you’ll build a knowledge-sharing system that not only supports current teammates but also sets up future ones for success.
Modern software development hinges on building applications where clients and servers communicate efficiently. Two standout approaches for building APIs are gRPC and REST. While REST has been the go-to standard for years, gRPC has emerged as serious competition for many use cases.
\\nLet’s compare these technologies in terms of protocol, performance characteristics, and key decision factors to help you make the right choice for your project.
\\nWe’ll dive deep into both approaches, but if you’re looking for a quick answer:
- Choose REST for public APIs, browser-based clients, and teams that value simplicity and broad compatibility
- Choose gRPC for internal microservices, performance-critical paths, and cases where strong typing and streaming matter

Now, let's explore the gRPC vs. REST comparison in detail.
\\nREST is an architectural style that handles resource manipulation using HTTP methods (GET
, POST
, PUT
, DELETE
). It typically encodes data using JSON or XML.
GET /api/users/123 HTTP/1.1\\nHost: example.com\\nAccept: application/json\\n\\n
{\\n \\"id\\": 123,\\n \\"name\\": \\"John Smith\\",\\n \\"email\\": \\"[email protected]\\",\\n \\"created_at\\": \\"2025-01-15T08:30:00Z\\"\\n}\\n\\n
gRPC is Google’s high-performance RPC framework. It uses HTTP/2 as its transport protocol and Protocol Buffers (protobuf) for serialization.
\\nsyntax = \\"proto3\\";\\n\\nservice UserService {\\n rpc GetUser(GetUserRequest) returns (User) {}\\n rpc ListUsers(ListUsersRequest) returns (stream User) {}\\n rpc UpdateUser(UpdateUserRequest) returns (User) {}\\n}\\n\\nmessage GetUserRequest {\\n int32 user_id = 1;\\n}\\n\\nmessage User {\\n int32 id = 1;\\n string name = 2;\\n string email = 3;\\n string created_at = 4;\\n}\\n\\nmessage ListUsersRequest {\\n int32 page_size = 1;\\n string page_token = 2;\\n}\\n\\nmessage UpdateUserRequest {\\n User user = 1;\\n}\\n\\n
One of gRPC’s most significant technical advantages is its use of HTTP/2 as a transport protocol. Understanding the technical details of HTTP/2 helps explain why gRPC offers substantial performance benefits.
\\nHTTP/2 introduces a binary framing layer that fundamentally changes how clients and servers exchange data, unlike HTTP/1.1’s text-based protocol.
\\nOne of HTTP/2’s biggest features is true multiplexing, which allows multiple request and response messages to travel simultaneously over the same TCP connection.
\\nIn REST over HTTP/1.1, browsers create six to eight TCP connections to achieve pseudo-parallelism, which can’t match HTTP/2’s multiplexing optimization.
\\nHTTP/2 uses HPACK, a specialized compression algorithm designed specifically for HTTP headers. This means it:
- Maintains indexed header tables on both the client and server, so repeated headers are sent as short references instead of full strings
- Huffman-encodes literal header values to shrink them further
- Avoids re-sending unchanged headers across requests on the same connection

HPACK can reduce header size by 80-90%, which is especially beneficial for use cases with many small requests or mobile clients operating in limited-bandwidth conditions.
\\nHTTP/2 allows clients to specify dependencies between streams and assign weights to them.
\\n\\nREST | \\ngRPC | \\n
---|---|
Uses text-based formats (JSON/XML) | \\nUses binary Protocol Buffers | \\n
Larger payload sizes due to text encoding | \\nTypically 30-40% smaller payloads compared to JSON | \\n
Human-readable but less efficient | \\nFaster serialization/deserialization | \\n
Serialization/deserialization can be CPU-intensive for large payloads | \\nReduced network bandwidth usage | \\n
REST | \\ngRPC | \\n
---|---|
Primarily uses HTTP/1.1 (though HTTP/2 is possible) | \\nBuilt on HTTP/2 | \\n
One request-response cycle per TCP connection | \\nMultiplexing multiple requests over a single connection | \\n
Higher latency with multiple requests | \\nBidirectional streaming reduces latency | \\n
Limited connection reuse | \\nPersistent connections improve performance | \\n
Header compression available only when served over HTTP/2 | \\nHeader compression (HPACK) reduces overhead by default | \\n
In typical scenarios, gRPC outperforms REST in several metrics:

- Lower per-request latency, thanks to multiplexed HTTP/2 connections
- Smaller payloads (often 30-40% smaller than equivalent JSON)
- Higher throughput on the same hardware, due to cheaper serialization and deserialization
To make the comparison concrete, here are two equivalent user services: first an example REST-based system, then the same service as a gRPC-based system:
\\nconst express = require(\'express\');\\nconst app = express();\\napp.use(express.json());\\n\\n// User data store\\nconst users = {\\n 123: { id: 123, name: \\"John Smith\\", email: \\"[email protected]\\", created_at: \\"2025-01-15T08:30:00Z\\" }\\n};\\n\\n// GET endpoint to fetch a user\\napp.get(\'/api/users/:id\', (req, res) => {\\n const userId = parseInt(req.params.id);\\n const user = users[userId];\\n\\n if (!user) {\\n return res.status(404).json({ error: \\"User not found\\" });\\n }\\n\\n return res.json(user);\\n});\\n\\n// POST endpoint to create a user\\napp.post(\'/api/users\', (req, res) => {\\n const newUser = req.body;\\n const id = Object.keys(users).length + 1;\\n\\n users[id] = {\\n id,\\n ...newUser,\\n created_at: new Date().toISOString()\\n };\\n\\n return res.status(201).json(users[id]);\\n});\\n\\napp.listen(3000, () => {\\n console.log(\'REST API server running on port 3000\');\\n});\\n\\n
// user.proto file already defined as shown earlier\\n\\nconst grpc = require(\'@grpc/grpc-js\');\\nconst protoLoader = require(\'@grpc/proto-loader\');\\n\\n// Load protobuf\\nconst packageDefinition = protoLoader.loadSync(\'user.proto\', {\\n keepCase: true,\\n longs: String,\\n enums: String,\\n defaults: true,\\n oneofs: true\\n});\\n\\nconst userProto = grpc.loadPackageDefinition(packageDefinition);\\n\\n// User data store\\nconst users = {\\n 123: { id: 123, name: \\"John Smith\\", email: \\"[email protected]\\", created_at: \\"2025-01-15T08:30:00Z\\" }\\n};\\n\\n// Implement the service\\nconst server = new grpc.Server();\\nserver.addService(userProto.UserService.service, {\\n getUser: (call, callback) => {\\n const userId = call.request.user_id;\\n const user = users[userId];\\n\\n if (!user) {\\n return callback({ code: grpc.status.NOT_FOUND, message: \'User not found\' });\\n }\\n\\n callback(null, user);\\n },\\n\\n listUsers: (call) => {\\n // Implement streaming response\\n Object.values(users).forEach(user => {\\n call.write(user);\\n });\\n call.end();\\n },\\n\\n updateUser: (call, callback) => {\\n const updatedUser = call.request.user;\\n\\n if (!users[updatedUser.id]) {\\n return callback({ code: grpc.status.NOT_FOUND, message: \'User not found\' });\\n }\\n\\n users[updatedUser.id] = {\\n ...users[updatedUser.id],\\n ...updatedUser\\n };\\n\\n callback(null, users[updatedUser.id]);\\n }\\n});\\n\\nserver.bindAsync(\'0.0.0.0:50051\', grpc.ServerCredentials.createInsecure(), () => {\\n console.log(\'gRPC server running on port 50051\');\\n server.start();\\n});\\n\\n
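To exercise the service, a client built from the same user.proto might look like the following sketch (the loader options mirror the server's, so snake_case field names like user_id are preserved):

// client.js: a minimal sketch using @grpc/grpc-js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('user.proto', {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true
});
const userProto = grpc.loadPackageDefinition(packageDefinition);

const client = new userProto.UserService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

// Unary call
client.getUser({ user_id: 123 }, (err, user) => {
  if (err) return console.error(err);
  console.log('Got user:', user);
});

// Server-streaming call
const stream = client.listUsers({ page_size: 10 });
stream.on('data', (user) => console.log('Streamed user:', user));
stream.on('end', () => console.log('Stream finished'));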
Here are key factors to consider for your project:

- Who consumes the API: browsers and third parties favor REST, while internal services can use gRPC
- Performance requirements: latency-sensitive, high-throughput paths benefit from gRPC
- Streaming needs: gRPC has first-class bidirectional streaming
- Team and tooling: REST has a lower learning curve and broader tooling support
A hybrid approach works best for many systems:

- Expose REST endpoints to public clients and browser-based applications
- Use gRPC for internal service-to-service communication, where performance matters most
\\nBoth API patterns have their place in modern software architecture.
\\nREST, with its simplicity and broad compatibility, remains the go-to choice for public APIs and browser-based applications.
\\ngRPC shines in performance-sensitive environments, microservices communication, and cases where strong typing and code generation are essential.
\\nHappy API building!
In this tutorial, we'll explore the ViewTransition, addTransitionType, and Activity APIs with hands-on guidance. Developers will learn what they can do with these new features in real-world projects.
The React team has released the long-awaited View Transitions and Activity APIs, which at the time of writing this tutorial are still experimental.
Before now, the View Transitions API, which makes it easier to add animations to DOM elements with less JavaScript and CSS code when transitioning between pages in web apps, was only available as a native browser API. To optimize the performance of these animations, the React team built on the native View Transitions API to support React's virtual DOM.
\\nThe new React Activity API offers a more performant approach to pre-render or visually hide parts of your UI while preserving their state.
To follow along with this tutorial, you should have:

- Node.js and npm installed
- A working knowledge of React and React Hooks
- Basic familiarity with CSS animations
\\nThe React View Transition API takes care of view transitions behind the scenes. With it, you won’t need to directly interact with the native view transition API, like manually calling the document.startViewTransition()
method. It applies a view-transition-name
to the closest DOM node inside the <ViewTransition>
component, and if the DOM node has sibling nodes, it ensures each gets a unique name.
If a transition is already running, React waits for it to finish before executing another. And if multiple transition updates happen while a transition is in progress, React batches them into a single transition from the current to the latest state.
\\nWhen a transition starts, React runs lifecycle methods like getSnapshotBeforeUpdate
, applies DOM mutations, waits for things like fonts and navigation to finish, measures layout changes, and then figures out what it needs to animate.
After the transition is ready, React lets you hook into callbacks like onEnter
, onExit
, onUpdate
, and onShare
for manual control of the transitions.
One thing to watch for: if a flushSync
happens in the middle, React skips the transition because it needs to finish synchronously. Finally, React runs useEffect
after the animation is done, unless another update forces it to run earlier to keep everything in order.
The React team occasionally introduces experimental features for early testing and feedback from developers before adding them to the stable release. To explore these experimental features, you need to opt into an experimental React build and configure your project based on React documentation or RFCs.
\\nAfter setting up your React project with Vite or Create React App, install the experimental versions to override the regular react
and react-dom
packages:
npm install react@experimental react-dom@experimental\\n\\n
Next, check package compatibility, because not all third-party packages support experimental builds. The View Transitions and Activity APIs don't require enabling any feature flag; check the React documentation or the RFCs to see which experimental APIs do.
\\nWith this configuration, you can explore the View Transitions and Activity APIs functionality and provide feedback ahead of their official releases.
\\nTo make this tutorial as practical as possible, we’ll work with an AirBnB clone project, exploring various use cases for the View Transitions and Activity APIs.
The project covers:

- A home page that lists properties, with a search input for filtering them
- A listing detail page with a reservation section loaded via Suspense
- Page transitions between the home and listing pages
Clone the starter project to follow along with this tutorial.
\\nSince transitions between pages or views depend on the routing logic, to work with the React View Transitions API, you have to configure your routers to enable view transitions.
\\n\\nThe React View Transitions API supports three triggers (startTransition
, useDeferredValue
and Suspense
) for a View Transition.
In this section, we’ll introduce the startTransition
trigger:
startTransition(() => setState(...));\\n\\n
To trigger a view transition, add startTransition
to your router config as follows:
import {createContext, useTransition} from \\"react\\";\\nconst RouterContext = createContext({ url: \\"/\\", params: {} });\\nexport function Router({ children }) {\\n const [isPending, startTransition] = useTransition();\\n function navigate(url) {\\n // Update router state in transition.\\n startTransition(() => {\\n go(url);\\n });\\n }\\n return (\\n <RouterContext\\n value={{\\n ...,\\n navigate\\n }}\\n >\\n {children}\\n </RouterContext>\\n )\\n}\\n\\n
The useTransition
Hook handles navigation as a low-priority update (non-blocking). When you call navigate(\\"/new-url\\")
, it triggers a transitioned navigation, then calls the go(URL)
function that updates the URL and router state during the transition.
For the full router configuration for this demo, check out the router.jsx
file.
Now you can add <ViewTransition>
to the App
component to animate between page transitions:
import {unstable_ViewTransition as ViewTransition} from \'react\';\\nimport {useRouter} from \'./router\';\\nimport \\"./App.css\\";\\nimport Listing from \\"./views/Listing\\";\\nimport Home from \'./views/Home\';\\nfunction App() {\\n const { url } = useRouter();\\n return (\\n <ViewTransition>\\n {url === \\"/\\" ? <Home /> : <Listing/>}\\n </ViewTransition>\\n )\\n}\\n\\n
Run the app, and you’ll notice the subtle cross-fade animation on page transition between the home page and the listing page.
\\nCustomizing the default animations in view transition is as easy as adding the default prop to the <ViewTransition>
component and setting its value to the transition class (CSS class name(s)) applied by React during the transition:
<ViewTransition default=\\"transition-classname\\">\\n {url === \\"/\\" ? <Home /> : <Listing/>}\\n</ViewTransition>\\n\\n
Then define the transition-classname
in CSS to control the page transitions using traditional CSS:
::view-transition-old(.transition-classname) {\\n animation-duration: 1000ms;\\n}\\n::view-transition-new(.transition-classname) {\\n animation-duration: 1000ms;\\n}\\n\\n
Typical transition classes include slide-in, slide-out, fade-in, fade-out, etc.
With this, you can customize the view transition’s default cross-fade animation.
\\nUpdate App.js
with the following:
...\\nimport {unstable_ViewTransition as ViewTransition} from \'react\'; \\n\\nfunction App() {\\n const { url } = useRouter();\\n return (\\n <ViewTransition default=\\"slow-fade\\">\\n {url === \\"/\\" ? <Home /> : <Listing />}\\n </ViewTransition>\\n );\\n}\\n\\n
Then add the following to App.css
:
::view-transition-old(.slow-fade) {\\n animation-duration: 1000ms;\\n}\\n::view-transition-new(.slow-fade) {\\n animation-duration: 1000ms;\\n}\\n\\n
Run the app, and you’ll see that the cross fade is slower:
\\nThe typical use case for a shared element transition is a thumbnail image on our home page transitioning into a full-width listing image on the listing details page.
\\n\\nTo implement this, add a unique name
to the <ViewTransition>
. Update the Thumbnail
component as follows:
import { unstable_ViewTransition as ViewTransition } from \\"react\\"; \\n\\nexport function Thumbnail({ listing, children }) {\\n return (\\n <ViewTransition name={`listing-${listing.id}`}>\\n {children}\\n </ViewTransition>\\n );\\n}\\n\\n
This adds a unique name to animate with a shared element transition. When React detects that a <ViewTransition>
with a specific name is removed and a new <ViewTransition>
with the same name is added, it automatically triggers a shared element transition between them.
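On the listing detail page, the hero image would be wrapped in a <ViewTransition> carrying the same name. A sketch, with the markup inside the wrapper being illustrative:

// Hypothetical counterpart in the Listing view
<ViewTransition name={`listing-${listing.id}`}>
  <img
    className="listing-hero"
    src={listing.image}
    alt={listing.title}
  />
</ViewTransition>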
addTransitionType
APIReact’s View Transition API supports animating based on the cause of the transition. With this, you can use the addTransitionType
API to specify the cause of a transition.
Add addTransitionType
to the startTransition
trigger:
startTransition(() => {\\n addTransitionType(\'nav-forward\');\\n go(url);\\n});\\n\\n
This sets the cause of transition to nav-forward
. Now you can update the <ViewTransition>
\\ncomponent with the following:
<ViewTransition\\n name=\\"nav\\"\\n share={{\\n \'nav-forward\': \'slide-forward\',\\n }}>\\n ...\\n</ViewTransition>\\n\\n
React will apply the slide-forward
transition class to animate the <ViewTransition>
based on the nav-forward
transition type.
To see this in practice, update the navigate
and navigateBack
methods in router.jsx
with the following:
function navigate(url) {\\n startTransition(() => {\\n addTransitionType(\'nav-forward\');\\n go(url);\\n });\\n}\\nfunction navigateBack(url) {\\n startTransition(() => {\\n addTransitionType(\'nav-back\');\\n go(url);\\n });\\n}\\n\\n
Wrap the {heading}
prop in Layout.jsx
with the following:
<ViewTransition\\n name=\\"nav\\"\\n share={{\\n \'nav-forward\': \'slide-forward\',\\n \'nav-back\': \'slide-back\',\\n }}>\\n {heading}\\n</ViewTransition>\\n\\n
Then define the \'slide-forward\'
and \'slide-back\'
transition classes in App.css
as follows:
/* Animations for view transition classed added by transition type */\\n::view-transition-old(.slide-forward) {\\n /* when sliding forward, the \\"old\\" page should slide out to left. */\\n animation: 150ms cubic-bezier(0.4, 0, 1, 1) both fade-out,\\n 400ms cubic-bezier(0.4, 0, 0.2, 1) both slide-to-left;\\n}\\n::view-transition-new(.slide-forward) {\\n /* when sliding forward, the \\"new\\" page should slide in from right. */\\n animation: 210ms cubic-bezier(0, 0, 0.2, 1) 150ms both fade-in,\\n 400ms cubic-bezier(0.4, 0, 0.2, 1) both slide-from-right;\\n}\\n::view-transition-old(.slide-back) {\\n /* when sliding back, the \\"old\\" page should slide out to right. */\\n animation: 150ms cubic-bezier(0.4, 0, 1, 1) both fade-out,\\n 400ms cubic-bezier(0.4, 0, 0.2, 1) both slide-to-right;\\n}\\n::view-transition-new(.slide-back) {\\n /* when sliding back, the \\"new\\" page should slide in from left. */\\n animation: 210ms cubic-bezier(0, 0, 0.2, 1) 150ms both fade-in,\\n 400ms cubic-bezier(0.4, 0, 0.2, 1) both slide-from-left;\\n}\\n/* New keyframes to support our animations above. */\\n@keyframes fade-in {\\n from {\\n opacity: 0;\\n }\\n}\\n@keyframes fade-out {\\n to {\\n opacity: 0;\\n }\\n}\\n@keyframes slide-to-right {\\n to {\\n transform: translateX(50px);\\n }\\n}\\n@keyframes slide-from-right {\\n from {\\n transform: translateX(50px);\\n }\\n to {\\n transform: translateX(0);\\n }\\n}\\n@keyframes slide-to-left {\\n to {\\n transform: translateX(-50px);\\n }\\n}\\n@keyframes slide-from-left {\\n from {\\n transform: translateX(-50px);\\n }\\n to {\\n transform: translateX(0);\\n }\\n}\\n\\n
With this, the property name on the listing detail page slides in from the right when you enter the page, and the listing count on the Home page slides in from the left when you navigate back.
\\nMake sure to use unique name props on the <ViewTransition>
component to avoid errors caused by duplicate view transition names.
In the router section, we mentioned Suspense as one of the React View Transitions API's supported triggers. In this section, we'll explore animating Suspense boundaries with the Suspense
trigger.
To implement this, wrap the Suspense
component with <ViewTransition>
:
<ViewTransition>\\n <Suspense fallback={<ReservationFallback />}>\\n <Reservation id={listing.id} />\\n </Suspense>\\n</ViewTransition>\\n\\n
You can also animate the Suspense
fallback and content individually for a more granular animation experience.
Update the Suspense
in Listing/index.jsx
with the following:
import React, { Suspense, unstable_ViewTransition as ViewTransition } from \\"react\\";\\nconst Listing = ({listing}) => {\\n return (\\n <div>\\n <ViewTransition default=\\"slow-fade\\">\\n <Suspense fallback={<ViewTransition exit=\\"slide-down\\"><ReservationFallback /></ViewTransition>}>\\n <ViewTransition enter=\\"slide-up\\">\\n <Reservation id={listing.id} />\\n </ViewTransition>\\n </Suspense>\\n </ViewTransition>\\n </div>\\n )\\n}\\n\\n
Add the slide-down
and slide-up
transition classes to App.css
:
/* Slide the fallback down */\\n::view-transition-old(.slide-down) {\\n animation: 150ms ease-out both fade-out, 150ms ease-out both slide-down;\\n}\\n\\n/* Slide the content up */\\n::view-transition-new(.slide-up) {\\n animation: 210ms ease-in 150ms both fade-in, 400ms ease-in both slide-up;\\n}\\n\\n/* Define the new keyframes */\\n@keyframes slide-up {\\n from {\\n transform: translateY(10px);\\n }\\n to {\\n transform: translateY(0);\\n }\\n}\\n\\n@keyframes slide-down {\\n from {\\n transform: translateY(0);\\n }\\n to {\\n transform: translateY(10px);\\n }\\n}\\n\\n
This will slide the Suspense
fallback down and slide the content up:
useDeferredValue
We also mentioned useDeferredValue
as one of the React View Transitions API's supported triggers. In this section, we'll explore triggering animations with the useDeferredValue
trigger.
Let’s consider the use case of animating filtered or re-ordered elements from a list:
\\nconst [searchText, setSearchText] = useState(\\"\\");\\nconst deferredSearchText = useDeferredValue(searchText);\\nconst foundListings = filterListings(listings, deferredSearchText);\\n\\n
Then wrap the component that depends on foundListings
with <ViewTransition>
:
<ViewTransition>\\n <Cards list={foundListings} />\\n</ViewTransition>\\n\\n
To see this in practice, update the Home component in Home/index.jsx
with the following:
export default function Home() {\\n const listings = use(fetchListings());\\n const count = listings.length;\\n const [searchText, setSearchText] = useState(\\"\\");\\n const deferredSearchText = useDeferredValue(searchText);\\n const foundListings = filterListings(listings, deferredSearchText);\\n return (\\n <Layout heading={<p className=\\"section-1__title\\">{count} Listings</p>}>\\n <Filter />\\n <SearchInput value={searchText} onChange={setSearchText} />\\n <div className=\\"listing-list\\">\\n {foundListings.length === 0 && (\\n <div className=\\"no-results\\">No results</div>\\n )}\\n <div className=\\"listings\\">\\n <ViewTransition>\\n <Cards list={foundListings} />\\n </ViewTransition>\\n </div>\\n </div>\\n </Layout>\\n );\\n}\\n\\n
Now, you should notice the animation while searching for a property listing on the Home page.
\\nThe new React Activity API offers a more performant approach to pre-render or visually hide parts of the UI while preserving their state, compared to the performance costs of unmounting or hiding with CSS.
\\nThe applicable use case for the Activity API includes saving state for parts of the UI the user isn’t using and pre-rendering parts of the UI that the user is likely to use next.
\\nWith the current implementation of the demo app, when the user enters a value in the search field and navigates to the listing detail page, the value in the search field disappears once the user returns to the Home page.
\\nTo ensure that this value is persisted upon leaving the Home page, wrap the Home
component in App.jsx
with <Activity>
as follows:
<Activity mode={url === \'/\' ? \'visible\' : \'hidden\'}>\\n <Home />\\n</Activity>\\n\\n
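Note that, like ViewTransition, the Activity API ships under an unstable alias in experimental builds at the time of writing, so the import looks like this:

import { unstable_Activity as Activity } from 'react';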
To pre-render parts of the UI that the user is likely to visit next, update App.jsx with the following:
\\nfunction App() {\\n const { url } = useRouter();\\n const listingId = url.split(\\"/\\").pop();\\n const listings = use(fetchListings());\\n\\n return (\\n <ViewTransition default=\\"slow-fade\\">\\n ...\\n {listings.map((listing) => (\\n <Activity key={listing.id} mode={Number(listingId) === listing.id ? \'visible\' : \'hidden\'}>\\n <Listing listing={listing}/>\\n </Activity>\\n ))}\\n </ViewTransition>\\n );\\n}\\nexport default App;\\n\\n
This pre-renders the Listing
component for all the listing items. Once the listing detail page is visited and the listing id
matches the listingId popped
from the URL, the Listing
component renders completely.
Update the Listing
component in Listing/index.jsx
to receive the listing
prop:
const Listing = ({listing}) => {\\n const { url, navigateBack } = useRouter();\\n return (...)\\n}\\n\\n
With the pre-render implementation, the Suspense
component will animate and render immediately without the fallback.
Here is what the final build looks like:
\\nYou can also find the code for the final build on GitHub.
\\nIn this tutorial, we explored the new React View Transitions, addTransitionType
, and Activity API with hands-on examples in a real-world application. We also covered animating elements on page transitions, animating a shared element, animating the reorder of items in a list, animating from Suspense content, customizing animations, pre-rendering, and visually hiding parts of your UI while preserving their state.
Keep in mind that experimental features can change or be removed at any time. It’s best to avoid them in production as they can break your app in future updates. Use them only in development environments, and always check the release notes for breaking changes.
\\nIf you encounter any issues while following this tutorial or need expert help with web/mobile development, don’t hesitate to reach out on LinkedIn. I’d love to connect and am always happy to assist!
When the TypeScript team announced they were rewriting the compiler in Go, it was framed as a pragmatic shift driven by performance. Many applauded the decision, including some of my LogRocket author colleagues in their article, TypeScript is Going Go: Why It's the Pragmatic Choice. But now, with two months of hands-on experience with the Go-powered compiler, it's time to reflect on the other side of the story: one that's less about benchmarks and more about the broader developer ecosystem, tooling disruption, and long-term maintainability.
\\nLet’s start with the obvious: yes, Go is fast. The new compiler is reportedly up to 10× faster in some scenarios. Developers using large codebases in VS Code are seeing faster feedback loops. CI pipelines are shaving seconds to minutes off their runs. These are measurable, valuable wins.
\\nBut performance alone doesn’t make a language choice pragmatic.
TypeScript didn’t evolve in isolation. It was born out of and thrives in an ecosystem deeply tied to JavaScript and its tooling. This ecosystem is full of tools like ts-loader
, ts-blank-space
, custom linters, and AST transformers that depend on long-stable, accessible JavaScript/TypeScript APIs. These tools now face an uncertain future.
Many internal compiler APIs are being rewritten in Go. Some won’t make the cut at all. While the core TypeScript CLI and language services will continue to function, any project that deeply integrates with the compiler internals will likely break or require costly rewrites.
\\nType-checking loaders, custom compilers, and advanced plugin systems are suddenly brittle or obsolete.
\\nThis is more than technical debt; it’s ecosystem debt.
\\nIf performance were the driver, why not choose Rust? Tools like SWC and Deno have shown Rust’s ability to deliver both speed and safety in the JS ecosystem. And unlike Go, Rust is already the default language for many modern TypeScript-adjacent projects.
\\nYes, Rust has a steeper learning curve and lacks a garbage collector. But these challenges come with a payoff: control, memory safety, and the ability to write highly modular, embeddable components. In contrast, Go’s simplistic type system and opinionated concurrency model make it great for monolithic systems, not language tooling that thrives on extensibility.
\\nIn short: Rust is harder to port to, but easier to build on. Go is easier to port to, but harder to innovate on.
\\nBeyond the ecosystem breakage and missed opportunity to embrace Rust, the decision to port TypeScript to Go introduces long-term tradeoffs that are already becoming apparent:
\\nWhile Go 1.18 finally introduced generics after years of community requests, its type system remains far less expressive than TypeScript’s.
\\nThere’s no concept of higher-kinded types, no type inference as powerful as what TypeScript supports, and constrained interfaces often require verbose boilerplate. For a project that revolves entirely around types — and complex ones at that — this is a fundamental mismatch that could slow down future enhancements or make them harder to implement cleanly.
Building compilers and static analyzers typically requires a strong reflection system, AST-level manipulation, and introspective APIs. Rust and even JavaScript offer more capable tooling in this space.
\\nTypeScript has built one of the most active language communities in recent history. Most contributors to the TypeScript compiler understand JavaScript or TypeScript deeply, but Go is an entirely different ecosystem with different idioms, tooling, and mental models.
\\nMoving the core compiler to Go introduces a language barrier, increasing friction for contributors and making onboarding more difficult. This could lead to a smaller pool of maintainers, longer feedback cycles, and fewer community-driven improvements.
\\nThe original LogRocket post briefly mentioned this concern: the TypeScript team will now write more Go and less TypeScript. This might sound like a non-issue. After all, the language isn’t changing, right?
\\nBut writing the language you build is how many of the best DX decisions are made. It’s how bugs are caught early, friction is felt firsthand, and features are shaped by real use. Moving compiler work away from TypeScript breaks that feedback loop.
\\n\\nOver time, this could widen the gap between how TypeScript is built and how it’s used.
\\nWhile the Go port brings massive benefits on the CLI side (faster builds, better performance), it comes with long-term maintenance and developer experience risks that the TypeScript team will need to actively mitigate.
\\nIronically, the very pragmatism that TypeScript was built on is now at risk due to reduced community accessibility.
\\nThe TypeScript team has stated that the CLI and language server will retain “close compatibility,” but early signs show that’s a moving target. For instance:
\\ntranspileModule
compatibility is uncertaints-loader
likely won’t support type checking mode anymore, a significant loss for webpack usersThese are not minor implementation details. They’re breaking changes at the heart of the TypeScript development workflow.
\\nUnsurprisingly, many in the web development world were disappointed that the TypeScript team chose Go over Rust. There’s a large, if not majority, portion of the community that adores Rust, and in some corners, it feels like there’s a push to have everything rewritten in it.
\\nAnd to be fair, valid concerns and thoughtful arguments are being raised in these discussions.
\\nFor example, Evan You, the creator of Vue.js and now involved in Rust-based tooling, publicly voiced concern about Go’s performance in WebAssembly.
His point is critical: web-based editors, playgrounds, and development tools often need to run the TypeScript compiler inside the browser, which effectively means running it as a WASM module.
\\nIf Go-based TypeScript performs poorly in that WASM context, it could significantly limit its usefulness in those browser-contained environments. In fact, in his testing, even simple typechecking tasks in Go-WASM were slower than existing JavaScript-based implementations, a worrying datapoint for those invested in web IDE performance.
\\nSo while Go may be the right tool for the server-side CLI and IDE tooling, Rust could’ve had the upper hand in all use cases, including browser-based use cases, if the team had the resources and time to do a full rewrite.
\\nTypeScript’s success has always been about more than performance. It’s been about the ecosystem: DefinitelyTyped, tight IDE integration, predictable APIs, and a vibrant community of contributors and tool authors.
\\nThe switch to Go may be a pragmatic move in the short term, but it risks alienating the very developers who built the tools that made TypeScript indispensable in the first place.
\\nIf pragmatism is the guiding principle, then perhaps we need to zoom out. Is it truly pragmatic to chase performance at the expense of ecosystem continuity, contributor inclusivity, and long-term extensibility?
\\nThe Go compiler for TypeScript is a bold move. It brings real speed gains, and those will benefit many developers. But two months in, it’s clear that the costs are deeper than expected. Go, while easy to work with for a small core team, imposes limitations that TypeScript and its ecosystem will feel for years to come.
\\nIf the TypeScript team had chosen Rust, the path might have been steeper up front, but the payoff in safety, performance, and ecosystem alignment would have been stronger and more future-proof.
\\nGo was a pragmatic choice if pragmatism only means “easy to build and fast to run.” But TypeScript has always stood for more than that.
Now, the question is whether the community will accept these trade-offs or start building their own compilers in Rust anyway.
TypeScript casting is a practical way to fix frustrating type errors and safely work with unknown data like JSON responses or form inputs. In this guide, we'll cover the basics and advanced use cases of type casting and provide some clarity on casting vs. assertion.
\\nTypeScript’s robust type system lets developers define and enforce types for variables, function parameters, return values, and more. Essentially, it does static type checking, and this helps catch and prevent many errors before the code even runs. But let’s be honest, type-related issues can still surprise us, especially when dealing with data that’s unpredictable.
\\nSo maybe you’re parsing a JSON response from an API or handling user input from a form, and TypeScript isn’t sure what type it is. The compiler might throw an error, and you feel stuck.
\\nThat’s where TypeScript’s casting feature comes in. It resolves these kinds of issues by explicitly telling TypeScript what a type value should be, allowing you to silence confusing type errors and guide the compiler in the right direction. In TypeScript, this process is technically called type assertion, though many developers use the terms “type casting” and “type assertion” interchangeably in everyday coding discussions.
\\nCasting is especially useful with dynamic data or when TypeScript’s inference falls short. In this article, we’ll dive into casting in TypeScript, showing you how and why to use it to fix type mismatches. To follow along, you should have a working knowledge of TypeScript and object-oriented programming.
\\nIn TypeScript, casting is a way for developers to tell the compiler to treat a value as a specific type, overriding the inferred type system when the developer has more information than the compiler.
\\nType casting can happen in one of two ways: it can be implicit, which is when TypeScript handles the operation, or explicit, when the developer handles the conversion. Implicit casting occurs when TypeScript sees a type error and attempts to safely correct it.
\\nType casting is essential for performing various operations, including mathematical calculations, data manipulation, and compatibility checks. But before you can start using type casting effectively, you’ll need to understand some foundational concepts like subtype and supertype relationships, type widening, and type narrowing.
\\nEditor’s note: This article was updated by Nelson Michael in May 2025 to clarify casting vs. assertion, expand examples with real-world use cases of type casting, and answer some commonly asked questions.
\\nWhile these two terms are often used interchangeably amongst developers, there is a subtle difference between type assertion and type casting in TypeScript:
\\nas
keyword, like we’ll see belowString()
, Number()
, Boolean()
, etc.The key difference is that type assertion is purely a compile-time construct — it tells TypeScript to treat a value as a certain type without affecting its runtime behavior. Type casting, on the other hand, actually transforms the data and can affect runtime behavior.
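To make the distinction concrete, here's a minimal sketch contrasting the two:

```ts
const input: unknown = "42";

// Type assertion: compile-time only, nothing changes at runtime
const asserted = input as string; // still the string "42"

// Type casting: an actual runtime conversion
const converted = Number(input); // the number 42

console.log(typeof asserted); // "string"
console.log(typeof converted); // "number"
```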
\\nOne way to classify types is to split them into sub- and supertypes. Generally, a subtype is a specialized version of a supertype that inherits the supertype’s attributes and behaviors. A supertype, on the other hand, is a more general type that is the basis of multiple subtypes.
\\nConsider a scenario where you have a class hierarchy with a superclass called Animal
and two subclasses named Cat
and Dog
. Here, Animal
is the supertype, while Cat
and Dog
are the subtypes. Type casting comes in handy when you need to treat an object of a particular subtype as its supertype or vice versa.
Type widening, or upcasting, occurs when you need to convert a variable from a subtype to a supertype. Type widening is usually implicit, meaning that it is performed by TypeScript, because it involves moving from a narrow category to a broader one. Type widening is safe, and it won’t cause any errors because a subtype inherently possesses all the attributes and behaviors of its supertype.
\\nType narrowing, or downcasting, occurs when you convert a variable from a supertype to a subtype. Type narrowing conversion is explicit and requires a type assertion or a type check to ensure the validity of the conversion. This process can be risky because not all supertype variables hold values that are compatible with the subtype.
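As a minimal sketch of both directions, using an Animal/Dog hierarchy like the one described above:

```ts
class Animal {
  eat(): void {
    console.log('Eating...');
  }
}

class Dog extends Animal {
  bark(): void {
    console.log('Woof!');
  }
}

// Widening (upcasting): implicit and always safe
const pet: Animal = new Dog();

// Narrowing (downcasting): explicit, and only safe after a runtime check
if (pet instanceof Dog) {
  pet.bark(); // TypeScript has narrowed pet to Dog here
}
```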
\\nas
operator

The as
operator is TypeScript’s primary mechanism for explicit type casting. With its intuitive syntax, as
allows you to inform the compiler about the intended type of a variable or expression.
\\nBelow is the general form of the as
operator:
value as Type\\n\\n
Here, value
represents the variable or expression you can cast, while Type
denotes the desired target type. By using as
, you explicitly assert that value
is of type Type
.
The as
operator is useful when you’re working with types that have a common ancestor, including class hierarchies or interface implementations. It allows you to indicate that a particular variable should be treated as a more specific subtype. Here’s some code to illustrate:
class Animal {\\n eat(): void {\\n console.log(\'Eating...\');\\n }\\n}\\n\\nclass Dog extends Animal {\\n bark(): void {\\n console.log(\'Woof!\');\\n }\\n}\\n\\nconst animal: Animal = new Dog();\\nconst dog = animal as Dog;\\ndog.bark(); // Output: \\"Woof!\\"\\n\\n
In this code, the Dog
class extends the Animal
class. The Dog
instance is assigned to a variable animal
of type Animal
. By using the as
operator, you cast animal
as Dog
, allowing you to access the bark()
method specific to the Dog
class. The code should output this:
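```
Woof!
```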
You can use the as
operator to cast to specific types. This capability comes in handy when you need to interact with a type that differs from the one inferred by TypeScript’s type inference system. Here’s an example:
function getLength(obj: any): number {\\n if (typeof obj === \'string\') {\\n return (obj as string).length;\\n } else if (Array.isArray(obj)) {\\n return (obj as any[]).length;\\n }\\n return 0;\\n}\\n\\n
The getLength
function accepts a parameter obj
of type any
. In the getLength
function, the as
operator casts obj
to a string or any[]
based on its type. This operation gives you access to the length
property specific to strings or arrays, respectively.
Additionally, you can cast to a union type to express that a value can be one of several types:
\\nfunction processValue(value: string | number): void {\\n if (typeof value === \'string\') {\\n console.log((value as string).toUpperCase());\\n } else {\\n console.log((value as number).toFixed(2));\\n }\\n}\\n\\n
The processValue
function accepts a parameter value
of type string | number
, indicating that it can be a string or a number. By using the as
operator, you cast value
to string
or number
within the respective conditions, allowing you to apply type-specific operations such as toUpperCase()
or toFixed()
.
Do not throw type casting at the smallest error! Type casting is a powerful feature, but you shouldn’t use it casually. It should be applied thoughtfully, usually when you’re confident about the data’s shape, but TypeScript isn’t. Here are some good use cases:
- Parsing JSON responses or form input whose shape you know
- Interacting with DOM APIs that return general element types
- Working with values typed as any from external libraries

In these scenarios, casting bridges the gap between TypeScript's static type system and real-world, often unpredictable, data.
\\nDOM API interactions often require casting because TypeScript can’t determine element types precisely:
\\n// Simple event handling with casting\\ndocument.querySelector(\'#loginForm\')?.addEventListener(\'submit\', (event) => {\\n event.preventDefault();\\n\\n // TypeScript doesn\'t know this is a form element\\n const form = event.target as HTMLFormElement;\\n\\n // Access form elements\\n const emailInput = form.elements.namedItem(\'email\') as HTMLInputElement;\\n const passwordInput = form.elements.namedItem(\'password\') as HTMLInputElement;\\n\\n const credentials = {\\n email: emailInput.value,\\n password: passwordInput.value\\n };\\n\\n // Process login...\\n});\\n\\n// Another common DOM casting scenario\\nfunction handleButtonClick(event: MouseEvent) {\\n const button = event.currentTarget as HTMLButtonElement;\\n const dataId = button.dataset.id; // TypeScript now knows about dataset\\n\\n // Load data based on the button\'s data attribute\\n loadItemDetails(dataId);\\n}\\n\\n
unknown
, never
, and any
Types in TypeScript?

TypeScript provides special types that sometimes require special handling with type assertions.
\\nThe unknown
type is safer than any
because it forces you to perform type checking before using the value:
function processValue(value: unknown) {\\n // Error: Object is of type \'unknown\'\\n // return value.length;\\n\\n // Correct: Using type checking first\\n if (typeof value === \'string\') {\\n return value.length; // TypeScript knows it\'s a string\\n }\\n\\n // Alternative: Using type assertion (less safe)\\n return (value as string).length; // Works but risky\\n}\\n\\n// A safer pattern with unknown\\nfunction parseConfig(config: unknown): { apiKey: string; timeout: number } {\\n // Validate before asserting\\n if (\\n typeof config === \'object\' && \\n config !== null &&\\n \'apiKey\' in config && \\n \'timeout\' in config\\n ) {\\n // Now we can safely cast\\n return config as { apiKey: string; timeout: number };\\n }\\n\\n throw new Error(\'Invalid configuration\');\\n}\\n\\n
The never
type represents values that never occur. It’s useful for exhaustiveness checking:
type Shape = Circle | Square | Triangle;\\n\\ninterface Circle {\\n kind: \'circle\';\\n radius: number;\\n}\\n\\ninterface Square {\\n kind: \'square\';\\n sideLength: number;\\n}\\n\\ninterface Triangle {\\n kind: \'triangle\';\\n base: number;\\n height: number;\\n}\\n\\nfunction getArea(shape: Shape): number {\\n switch (shape.kind) {\\n case \'circle\':\\n return Math.PI * shape.radius ** 2;\\n case \'square\':\\n return shape.sideLength ** 2;\\n case \'triangle\':\\n return (shape.base * shape.height) / 2;\\n default:\\n // This ensures we\'ve handled all cases\\n const exhaustiveCheck: never = shape;\\n return exhaustiveCheck;\\n }\\n}\\n\\n// If we add a new shape type but forget to update getArea:\\n// interface Rectangle { kind: \'rectangle\', width: number, height: number }\\n// The function will have a compile error at the exhaustiveCheck line\\n\\n
When dealing with any
, gradual typing can help improve safety:
// External API returns any\\nfunction externalApiCall(): any {\\n return { success: true, data: [1, 2, 3] };\\n}\\n\\n// Safely handle the response\\nfunction processApiResponse() {\\n const response = externalApiCall();\\n\\n // Check structure before casting\\n if (\\n typeof response === \'object\' && \\n response !== null && \\n \'success\' in response && \\n \'data\' in response && \\n Array.isArray(response.data)\\n ) {\\n // Now we can safely cast\\n const typedResponse = response as { \\n success: boolean; \\n data: number[] \\n };\\n\\n return typedResponse.data.map(n => n * 2);\\n }\\n\\n throw new Error(\'Invalid API response\');\\n}\\n\\n
However, casting comes with risks. Since type assertions and casts override TypeScript’s type checking, incorrect assumptions can result in runtime errors. For example, if you cast a value to a type it doesn’t match, your code may compile, but crash at runtime. That’s why casting should be your last resort, not your first.
\\nInstead of jumping straight to casting, consider these alternatives:
\\nUse typeof
, instanceof
, or custom type guards to help TypeScript infer the correct type.
Without narrowing you might be tempted to cast:
\\nfunction handleInput(input: string | number){\\n const value = (input as number) + 1 // This is unsafe if input is actually a string\\n}\\n\\n
With type narrowing (safe and readable):
\\nfunction handleInput(input: string | number){\\n if (typeof input === \\"number\\"){\\n return input + 1; // safely inferred as number\\n }\\n\\n return parseInt(input, 10) + 1;\\n}\\n\\n
instanceof
function logDate(value: Date | string) {
  if (value instanceof Date) {
    console.log(value.toISOString());
  } else {
    console.log(new Date(value).toISOString());
  }
}
type Dog = { kind: \\"dog\\"; bark: () => void };\\ntype Cat = { kind: \\"cat\\"; meow: () => void };\\ntype Pet = Dog | Cat;\\n\\nfunction isDog(pet: Pet): pet is Dog {\\n return pet.kind === \\"dog\\";\\n}\\n\\nfunction handlePet(pet: Pet) {\\n if (isDog(pet)) {\\n pet.bark(); // safely treated as Dog\\n } else {\\n pet.meow(); // safely treated as Cat\\n }\\n}\\n\\n
When working with reusable functions or components, generics can preserve type safety without the need for casting.
\\nWithout generics (requires casting):
\\nfunction getFirst(arr: any): any {\\n return arr[0];\\n}\\n\\nconst name = getFirst([\\"Alice\\", \\"Bob\\"]) as string; // cast needed\\n\\n
With generics, no cast is needed:

function getFirst<T>(arr: T[]): T {
  return arr[0];
}

const name = getFirst(["Alice", "Bob"]); // inferred as string
const age = getFirst([1, 2, 3]); // inferred as number
Generics preserve the data type, so there’s no need for assertions.
\\nIf you can model your data accurately from the start (e.g., via interfaces, enums, or discriminated unions), you’ll rarely need to cast at all.
\\nconst userData = JSON.parse(\'{\\"id\\": 1, \\"name\\": \\"Jane\\"}\');\\nconst user = userData as { id: number; name: string }; // type cast needed\\n\\n
With the shape modeled as an interface, the cast disappears:

interface User {
  id: number;
  name: string;
}

function parseUser(json: string): User {
  const data = JSON.parse(json);
  // Ideally validate `data` here before returning
  return data; // if validated, no cast needed
}
Here’s a more robust JSON parsing example when working with API responses:
\\n// Define your expected type \\ninterface User { \\n id: number; \\n name: string; \\n email: string; \\n preferences: { \\n darkMode: boolean; \\n notifications: boolean; \\n }; \\n}\\n\\n// API response handling \\n\\nasync function fetchUser(userId: string): Promise<User> { \\n const response = await fetch(`/api/users/${userId}`); \\n const data = await response.json(); // TypeScript sees this as \'any\' \\n // Option 1: Type assertion when you\'re confident about the structure \\n return data as User; \\n\\n // Option 2: Better approach with validation (recommended) \\n if (isUser(data)) { // Using a type guard function \\n return data; // TypeScript now knows this is User \\n } throw new Error(\'Invalid user data received\'); \\n}\\n\\n// Type guard \\n function function isUser(data: any): data is User { \\n return ( \\n typeof data === \'object\' && \\n data !== null && \\n typeof data.id === \'number\' && \\n typeof data.name === \'string\' && \\n typeof data.email === \'string\' && \\n typeof data.preferences === \'object\' && \\n typeof data.preferences.darkMode === \'boolean\' && \\n typeof data.preferences.notifications === \'boolean\' \\n ); \\n}\\n\\n
Discriminated unions, covered in more detail below, also let TypeScript narrow each branch without a cast:

type Response =
  | { status: "success"; data: string }
  | { status: "error"; message: string };

function handleResponse(res: Response) {
  if (res.status === "success") {
    console.log(res.data); // safely inferred
  } else {
    console.error(res.message); // safely inferred
  }
}
From a performance standpoint, casting has no cost; it exists purely at compile time. But safety and readability are still at stake. Overuse of as
can make your code brittle, hard to refactor, and confusing for future maintainers.
In short, cast only when:

- You're confident about the data's shape, but TypeScript isn't
- You've validated the value at runtime, or the risk of a mismatch is acceptable
- No safer alternative (narrowing, generics, or better upfront types) fits
\\nUse it wisely, document it clearly, and revisit it often — especially as your code evolves.
\\nas
operator

While the as
operator is a powerful tool for type casting in TypeScript, it has some limitations. One limitation is that as
operates purely at compile-time and does not perform any runtime checks. This means that if the casted type is incorrect, it may result in runtime errors. So, it is crucial to ensure the correctness of the type being cast.
Another limitation of the as
operator is that you can’t use it to cast between unrelated types. TypeScript’s type system provides strict checks to prevent unsafe casting, ensuring type safety throughout your codebase. In such cases, consider alternative approaches, such as type assertion functions or type guards.
as
casting

There are instances when TypeScript raises objections and refuses to allow as
casting. Let’s look at some situations that might cause this.
TypeScript’s static type checking relies heavily on the structural compatibility of types, including custom types. When you try to cast a value with the as
operator, the compiler assesses the structural compatibility between the original type and the desired type.
If the structural properties of the two custom types are incompatible, TypeScript will raise an error, signaling that the casting operation is unsafe. Here’s an example of type casting with structural incompatibility errors using custom types:
\\ninterface Square {\\n sideLength: number;\\n}\\n\\ninterface Rectangle {\\n width: number;\\n height: number;\\n}\\n\\nconst square: Square = { sideLength: 5 };\\nconst rectangle = square as Rectangle; // Error: Incompatible types\\n\\n
TypeScript prevents the as
casting operation because the two custom types, Square
and Rectangle
, have different structural properties. Instead of relying on the as
operator casting, a safer approach would be to create a new instance of the desired type, and then manually assign the corresponding values.
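For example, a sketch using the Square and Rectangle interfaces above:

```ts
const square: Square = { sideLength: 5 };

// Build a Rectangle explicitly instead of forcing a cast
const rectangle: Rectangle = {
  width: square.sideLength,
  height: square.sideLength,
};
```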
Union types in TypeScript allow you to define a value that can be one of several possible types. Type guards play a crucial role in narrowing down the specific type of a value within a conditional block, enabling type-safe operations.
\\nHowever, when attempting to cast a union type with the as
operator, it is required that the desired type be one of the constituent types of the union. If the desired type is not included in the union, TypeScript won’t allow the casting operation:
type Shape = Square | Rectangle;\\n\\nfunction getArea(shape: Shape) {\\n if (\'sideLength\' in shape) {\\n // Type guard: \'sideLength\' property exists, so shape is of type Square\\n return shape.sideLength ** 2;\\n } else {\\n // shape is of type Rectangle\\n return shape.width * shape.height;\\n }\\n}\\n\\nconst square: Shape = { sideLength: 5 };\\nconst area = getArea(square); // Returns 25\\n\\n
In the above snippet, you have a union type Shape
that represents either a Square
or Rectangle
. The getArea
function takes a parameter of type Shape
and needs to calculate the area based on the specific shape.
To determine the type of shape
inside the getArea
function, we use a type guard. The type guard checks for the presence of the sideLength
property using the in
operator. If the sideLength
property exists, TypeScript narrows down the type of shape
to Square
within that conditional block, allowing us to access the sideLength
property safely.
Type assertions, denoted with the as
keyword, provide functionality for overriding the inferred or declared type of a value. However, TypeScript has certain limitations on type assertions. Specifically, TypeScript prohibits as
casting when narrowing a type through control flow analysis:
function processShape(shape: Shape) {\\n if (\\"width\\" in shape) {\\n const rectangle = shape as Rectangle;\\n // Process rectangle\\n } else {\\n const square = shape as Square;\\n // Process square\\n }\\n}\\n\\n
TypeScript will raise an error because it cannot narrow the type of shape
based on the type assertions. To overcome this limitation, you can introduce a new variable within each branch of the control flow:
function processShape(shape: Shape) {
  if ("width" in shape) {
    const rectangle: Rectangle = shape;
    // Process rectangle
  } else {
    const square: Square = shape;
    // Process square
  }
}
By assigning the type assertion directly to a new variable, TypeScript can correctly infer the narrowed type.
\\nA discriminated union is a type that represents a value that can be of several possibilities. Discriminated unions combine a set of related types under a common parent, where each child type is uniquely identified by a discriminant property. This discriminant property serves as a literal type that allows TypeScript to perform exhaustiveness checking:
\\ntype Circle = {\\n kind: \'circle\';\\n radius: number;\\n};\\n\\ntype Square = {\\n kind: \'square\';\\n sideLength: number;\\n};\\n\\ntype Triangle = {\\n kind: \'triangle\';\\n base: number;\\n height: number;\\n};\\n\\ntype Shape = Circle | Square | Triangle;\\n\\n
You’ve defined three shape types: Circle
, Square
, and Triangle
, all collectively forming the discriminated union Shape
. The kind
property is the discriminator, with a literal value representing each shape type.
Discriminated unions become even more powerful when you combine them with type guards. A type guard is a runtime check that allows TypeScript to narrow down the possible types within the union based on the discriminant property.
\\nConsider this function that calculates the area of a shape:
\\nfunction calculateArea(shape: Shape): number {\\n switch (shape.kind) {\\n case \'circle\':\\n return Math.PI * shape.radius ** 2;\\n case \'square\':\\n return shape.sideLength ** 2;\\n case \'triangle\':\\n return (shape.base * shape.height) / 2;\\n default:\\n throw new Error(\'Invalid shape!\');\\n }\\n}\\n\\n
TypeScript leverages the discriminant property, kind
, in the switch
statement to perform exhaustiveness checking. If you accidentally omit a case, TypeScript will raise a compilation error, reminding you to handle all possible shape types.
You can use discriminated unions for type casting. Imagine a scenario where you have a generic response
object that can be one of two types: Success
or Failure
. You can use a discriminant property, status
, to differentiate between the two and perform type assertions accordingly:
type Success = {\\n status: \'success\';\\n data: unknown;\\n};\\n\\ntype Failure = {\\n status: \'failure\';\\n error: string;\\n};\\n\\ntype APIResponse = Success | Failure;\\n\\nfunction handleResponse(response: APIResponse) {\\n if (response.status === \'success\') {\\n // Type assertion: response is of type Success\\n console.log(response.data);\\n } else {\\n // Type assertion: response is of type Failure\\n console.error(response.error);\\n }\\n}\\n\\nconst successResponse: APIResponse = {\\n status: \'success\',\\n data: \'Some data\',\\n};\\n\\nconst failureResponse: APIResponse = {\\n status: \'failure\',\\n error: \'An error occurred\',\\n};\\n\\nhandleResponse(successResponse); // Logs: Some data\\nhandleResponse(failureResponse); // Logs: An error occurred\\n\\n
The status
property is the discriminator in the program above. TypeScript narrows down the type of the response
object based on the status
value, allowing you to safely access the respective properties without the need for explicit type checks.
satisfies
operator

The satisfies
operator was introduced in TypeScript 4.9 to allow you to check whether an expression’s type matches another type without casting the expression. This can be useful for validating the types of your variables and expressions without changing their original types.
Here’s the syntax for using the satisfies
operator:
expression satisfies type\\n\\n
And here’s a program that checks if a variable is greater than five with the satisfies
operator:
const number = 10;\\nnumber satisfies number > 5;\\n\\n
The satisfies
operator will return true
if the expression’s type matches, and false
if otherwise. It’s a powerful tool for improving the type safety of your TypeScript code.
In data manipulation, you’ll always need to transform data from one type to another, and the two common transformations you will run into are casting a string to a number or converting a value to a string. Let’s look at how to approach each one.
\\nThere are several ways to cast a string to a number in TypeScript:
\\nUsing the Number()
function:
let numString: string = \'42\';\\nlet num: number = Number(numString);\\n\\n
Using the unary +
operator:
let numString: string = \'42\';\\nlet num: number = +numString;\\n\\n
Using parseInt()
or parseFloat()
:
let intString: string = '42';
let int: number = parseInt(intString);

let floatString: string = '3.14';
let float: number = parseFloat(floatString);

parseInt() and parseFloat() are more flexible, as they allow extracting a number from a string that also includes non-numeric characters. Also note that all of these methods will yield NaN (Not a Number) if the string cannot be parsed as a number.
String()
You can use the String()
function or the toString()
method to convert a value to a string in TypeScript:
let num: number = 42;\\nlet numString: string = String(num);\\n// or\\nlet numString2: string = num.toString();\\n\\nlet bool: boolean = true;\\nlet boolString: string = String(bool);\\n// or\\nlet boolString2: string = bool.toString();\\n\\n
Both String()
and toString()
work on essentially any type and convert it to a string representation.
toString() is a method on the object itself, while String() is a global function. In most cases they yield the same result, because String() calls the object's toString() under the hood. That also means overriding toString() on a custom type customizes both:

class CustomType {
  value: number;

  constructor(value: number) {
    this.value = value;
  }

  toString() {
    return `CustomType: ${this.value}`;
  }
}

let custom = new CustomType(42);
console.log(String(custom)); // Output: CustomType: 42
console.log(custom.toString()); // Output: CustomType: 42

In the above snippet, both calls use our custom implementation. The practical difference is that String() also handles null and undefined safely (returning "null" and "undefined"), whereas calling .toString() on them throws a TypeError.
In this article, you learned about the various ways to perform type casting in TypeScript, including type assertion with the as
operator, type conversion using built-in methods like String()
, Number()
, and Boolean()
, and the subtle differences between type assertion and type casting.
You also learned about concepts like type guards and discriminated unions, which allow you to narrow down the specific type within a union type based on runtime checks or discriminant properties. With these techniques, you can efficiently improve the type safety of your programs and catch potential errors at compile time.
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nDate
object methods\\n Date
formatting methods\\n toLocaleDateString()
\\n JavaScript date handling presents challenges that can impact application reliability. This guide examines native Date API capabilities alongside specialized libraries, providing practical examples and performance metrics to inform your implementation decisions.
\\nYou’ll learn when to use built-in methods versus external libraries, how to properly handle localization and time zones, and how to avoid common date-related pitfalls in your projects.
\\nBefore formatting a date for display, you need to understand the format in which you receive it.
\\nHere are the three most common formats, each of which displays the same date and time:
\\nnew Date()
) — \\"2025-02-18T14:30:00.000Z\\"
1732561800000
Tue, 18 Feb 2025 14:30:00 +0000
Each format has its place depending on context. ISO 8601
is the most common for APIs and databases because it is standardized and easily parsed by new Date()
.
Since they’re just raw numbers, Unix timestamps
are great for calculations and comparisons. RFC 2822 is mostly seen in older systems or emails. Regardless of the format you start with, JavaScript’s Date
object is your primary tool for interpreting and working with these values.
Date
objectThe Date
object is JavaScript’s built-in way to work with dates and times. Here’s what you need to know:
// Creating a new Date object
const now = new Date(); // Current date and time
const isoDate = new Date('2025-02-18T14:30:00.000Z'); // From date string
const withComponents = new Date(2025, 1, 18); // Year, month (0-indexed!), day
const timeStampDate = new Date(1739889000000); // From a Unix timestamp (ms)
\\nTip: JavaScript months are zero-indexed (0 = January, 11 = December).
The Date
object stores dates as milliseconds since Thursday, January 1, 1970 (Unix epoch), but provides methods to work with them in human-readable formats.
Date
object methodsNative methods let you extract parts of the date. For example:
\\nconst date = new Date(\'2025-02-18T14:30:15Z\');\\n\\n// Getting components\\ndate.getFullYear(); // 2025\\ndate.getMonth(); // 1 (February, zero-indexed!!!!)\\ndate.getDate(); // 18\\ndate.getHours(); // 14\\ndate.getMinutes(); // 30\\ndate.getSeconds(); // 15\\ndate.getDay(); // 2 (Tuesday, with 0 being Sunday)\\ndate.getTime(); // Milliseconds since epoch\\n\\n// Setting components\\ndate.setFullYear(2026);\\ndate.setMonth(5); // June (because zero-indexed!!!)\\n\\n
Date
formatting methodsNot every scenario calls for a full-featured library; sometimes it’s like using a sledgehammer to crack a nut. In many cases, JavaScript’s built-in date formatting methods are more than sufficient:
\\nconst date = new Date(\'2025-02-18T14:30:00Z\');\\n\\n// Basic string conversion\\ndate.toString(); \\n// \\"Tue Feb 18 2025 14:30:00 GMT+0000 (Coordinated Universal Time)\\"\\n\\n// Date portion only\\ndate.toDateString(); \\n// \\"Tue Feb 18 2025\\"\\n\\n// Time portion only\\ndate.toTimeString(); \\n// \\"14:30:00 GMT+0000 (Coordinated Universal Time)\\"\\n\\n// UTC version (reliable across timezones)\\ndate.toUTCString(); \\n// \\"Tue, 18 Feb 2025 14:30:00 GMT\\"\\n\\n// ISO 8601 format\\ndate.toISOString(); \\n// \\"2025-02-18T14:30:00.000Z\\"\\n\\n
These native methods provide a quick way to format dates without any extra dependencies. They’re perfect for simple use cases like displaying UTC values or splitting out the date or time.
\\ntoLocaleDateString()
const date = new Date(\'2025-02-18\');\\n\\n// Basic usage (uses browser\'s locale)\\ndate.toLocaleDateString(); \\n// In US: \\"2/18/2025\\"\\n// In UK: \\"18/02/2025\\"\\n// In Germany: \\"18.2.2025\\"\\n\\n// With explicit locale\\ndate.toLocaleDateString(\'fr-FR\'); \\n// \\"18/02/2025\\"\\n\\n// With options\\nconst options = { \\n weekday: \'long\', \\n year: \'numeric\', \\n month: \'long\', \\n day: \'numeric\' \\n};\\ndate.toLocaleDateString(\'de-DE\', options); \\n// \\"Dienstag, 18. Februar 2025\\"\\n\\n
No locales, no options:
\\ndate.toLocaleDateString();\\n// 2/18/2025\\n\\n
Sometimes you need to implement a custom date formatting solution. This approach gives you granular control over the output format and allows for optimization based on specific use cases:
\\nfunction formatDate(date, format) {\\n const day = String(date.getDate()).padStart(2, \'0\');\\n const month = String(date.getMonth() + 1).padStart(2, \'0\');\\n const year = date.getFullYear();\\n const hours = String(date.getHours()).padStart(2, \'0\');\\n const minutes = String(date.getMinutes()).padStart(2, \'0\');\\n const seconds = String(date.getSeconds()).padStart(2, \'0\');\\n\\n // Replace tokens with actual values\\n return format\\n .replace(\'YYYY\', year)\\n .replace(\'MM\', month)\\n .replace(\'DD\', day)\\n .replace(\'HH\', hours)\\n .replace(\'mm\', minutes)\\n .replace(\'ss\', seconds);\\n}\\n\\nconst date = new Date(\'2025-02-18T14:30:45Z\');\\nconsole.log(formatDate(date, \'YYYY-MM-DD\')); // \\"2025-02-18\\"\\nconsole.log(formatDate(date, \'DD/MM/YYYY HH:mm:ss\')); // \\"18/02/2025 14:30:45\\"\\n\\n
While this approach works, it quickly gets complex when you consider:
\\nFor anything beyond simple formats, it’s time to bring out the big guns.
\\nNative methods can be too limiting, especially if you have to deal with complex localization, custom formatting, or timezone manipulations. In such cases, popular libraries can help. Let’s look at some popular date formatting libraries.
\\nIn my opinion, date-fns is the best choice for modern applications.
\\ndate-fns is:
\\nLet’s see how date-fns
simplifies common tasks like parsing ISO strings into Date
objects, formatting them into readable strings, and performing date math like adding days or finding the difference between two dates. Its functional design keeps things clean, predictable, and easy to chain together:
import { format, parseISO, addDays, differenceInDays } from \'date-fns\';\\n\\n// Parsing\\nconst date = parseISO(\'2025-02-18T14:30:00Z\');\\n\\n// Formatting\\nformat(date, \'yyyy-MM-dd\'); // \\"2025-02-18\\"\\nformat(date, \'MMMM do, yyyy\'); // \\"February 18th, 2025\\"\\nformat(date, \'h:mm a\'); // \\"2:30 PM\\"\\nformat(date, \'EEEE, MMMM do, yyyy h:mm a\'); // \\"Tuesday, February 18th, 2025 2:30 PM\\"\\n\\n// Operations\\nconst nextWeek = addDays(date, 7);\\nconst daysBetween = differenceInDays(nextWeek, date); // 7\\n\\n
date-fns
provides robust localization support through separate locale imports. This modular approach keeps your bundle size minimal by only including the locales you need:
import { format } from 'date-fns';
import { enUS, es, de, fr, ja, zhCN } from 'date-fns/locale';

const date = new Date('2025-02-18T14:30:00Z');

// Basic locale formatting
const localeExamples = {
  english: format(date, 'MMMM d, yyyy', { locale: enUS }),
  spanish: format(date, 'MMMM d, yyyy', { locale: es }),
  german: format(date, 'MMMM d, yyyy', { locale: de }),
  french: format(date, 'MMMM d, yyyy', { locale: fr }),
  japanese: format(date, 'MMMM d, yyyy', { locale: ja }),
  chinese: format(date, 'MMMM d, yyyy', { locale: zhCN })
};

console.log(localeExamples);
// Output:
// {
//   english: "February 18, 2025",
//   spanish: "febrero 18, 2025",
//   german: "Februar 18, 2025",
//   french: "février 18, 2025",
//   japanese: "2月 18, 2025",
//   chinese: "二月 18, 2025"
// }
If you need more customization or have edge cases to handle, check the documentation for additional techniques and examples.
\\ndate-fns-tz
extends date-fns
with robust timezone handling. It allows converting and formatting dates across time zones. Let’s explore its key features:
import { \\n format, \\n utcToZonedTime, \\n zonedTimeToUtc, \\n getTimezoneOffset \\n} from \'date-fns-tz\'; \\n\\nconst date = new Date(\'2025-02-18T14:30:00Z\');\\n\\n// Basic timezone conversion \\nconst timezoneExamples = { \\n newYork: utcToZonedTime(date, \'America/New_York\'), \\n tokyo: utcToZonedTime(date, \'Asia/Tokyo\'), \\n london: utcToZonedTime(date, \'Europe/London\'), \\n sydney: utcToZonedTime(date, \'Australia/Sydney\') \\n};\\n\\n// console.log(timezoneExamples)\\n\\n//{\\n// newYork: Tue Feb 18 2025 09:30:00 GMT-0500 (Eastern Standard Time),\\n// tokyo: Tue Feb 18 2025 23:30:00 GMT+0900 (Japan Standard Time),\\n// london: Tue Feb 18 2025 14:30:00 GMT+0000 (Greenwich Mean Time),\\n// sydney: Wed Feb 19 2025 01:30:00 GMT+1100 (Australian Eastern Daylight Time)\\n//}\\n\\n
Day.js has gained significant popularity as a modern, minimalist alternative to Moment.js. It was explicitly designed to address Moment’s shortcomings while maintaining a similar API, making it an excellent choice for migration projects.
\\nHere’s an example demonstrating how to set up Day.js with useful plugins for working with timezones, custom formats, and locales. We see how to create date objects from different input types, format them into readable strings, convert them to a specific timezone, and perform date math like adding or subtracting time—all while keeping the original dates immutable:
\\nimport dayjs from \'dayjs\';\\nimport utc from \'dayjs/plugin/utc\';\\nimport timezone from \'dayjs/plugin/timezone\';\\nimport localeData from \'dayjs/plugin/localeData\';\\nimport customParseFormat from \'dayjs/plugin/customParseFormat\';\\n\\n// Extend with plugins\\ndayjs.extend(utc);\\ndayjs.extend(timezone);\\ndayjs.extend(localeData);\\ndayjs.extend(customParseFormat);\\n\\n// Creating Day.js objects\\nconst today = dayjs();\\nconst specificDate = dayjs(\'2025-02-18\');\\nconst fromFormat = dayjs(\'18/02/2025\', \'DD/MM/YYYY\');\\n\\n// Formatting\\nspecificDate.format(\'YYYY-MM-DD\'); // \\"2025-02-18\\"\\nspecificDate.format(\'dddd, MMMM D, YYYY\'); // \\"Tuesday, February 18, 2025\\"\\n\\n// Timezone handling\\nspecificDate.tz(\'America/New_York\').format(\'YYYY-MM-DD HH:mm:ss Z\'); \\n// \\"2025-02-18 09:30:00 -05:00\\"\\n\\n// Manipulation (immutable - returns new instances)\\nconst nextWeek = specificDate.add(1, \'week\');\\nconst lastMonth = specificDate.subtract(1, \'month\');\\n\\n
Key advantages of Day.js include:
\\nThe plugin architecture of Day.js is really nice. You only pay the size cost for features you actually use:
\\n// Only import the plugins you need\\nimport relativeTime from \'dayjs/plugin/relativeTime\';\\nimport calendar from \'dayjs/plugin/calendar\';\\ndayjs.extend(relativeTime);\\ndayjs.extend(calendar);\\n\\n// Now you can use these features\\ndayjs(\'2025-02-18\').fromNow(); // \\"in X years\\" (depends on current date)\\ndayjs(\'2025-02-18\').calendar(); // \\"02/18/2025\\" or \\"Tuesday\\" based on how far in future\\n\\n
While Moment.js was once the go-to library for date handling in JavaScript, it is now considered legacy. The Moment.js team has officially declared the library in maintenance mode and recommends newer alternatives. That said, many existing projects still use it, so it’s worth understanding its approach.
\\n\\nLet’s look at a typical Moment.js workflow: creating date objects from strings or custom formats, formatting them into readable outputs, adjusting dates by adding or subtracting time, and converting them to specific time zones. It’s a practical example of how Moment was commonly used in real-world applications before more modern libraries took the lead:
\\nimport moment from \'moment\';\\nimport \'moment-timezone\';\\n\\n// Creating moments\\nconst now = moment(); // Current date/time\\nconst fromString = moment(\'2025-02-18T14:30:00Z\');\\nconst fromFormat = moment(\'18/02/2025\', \'DD/MM/YYYY\');\\n\\n// Formatting\\nfromString.format(\'YYYY-MM-DD\'); // \\"2025-02-18\\"\\nfromString.format(\'dddd, MMMM Do YYYY\'); // \\"Tuesday, February 18th 2025\\"\\nfromString.format(\'h:mm a\'); // \\"2:30 pm\\"\\n\\n// Operations (modifies the original moment)\\nfromString.add(7, \'days\');\\nfromString.subtract(2, \'months\');\\n\\n// Timezone handling\\nconst tokyoTime = fromString.clone().tz(\'Asia/Tokyo\').format(\'YYYY-MM-DD HH:mm:ss\');\\nconst nyTime = fromString.clone().tz(\'America/New_York\').format(\'YYYY-MM-DD HH:mm:ss\');\\n\\n
Moment's main drawbacks include:

- Mutability: operations like add() and subtract() modify the original moment instance
- Large bundle size (roughly 67 KB) with poor tree-shaking
- Maintenance mode: no new features are planned, only critical fixes
\\nThe ECMAScript Temporal proposal aims to replace the problematic Date API with a more comprehensive, immutable, and timezone-aware solution. While not yet standardized, it’s worth keeping an eye on as it represents the future of date handling in JavaScript.
\\nHere’s a snippet that showcases Temporal’s modern approach: creating immutable, timezone-aware dates and performing safe arithmetic:
\\n// This syntax is not yet available in browsers without polyfills\\n\\n// Creating a date (Temporal.PlainDate is timezone-independent)\\nconst date = Temporal.PlainDate.from({ year: 2025, month: 2, day: 18 });\\n\\n// Creating a specific time in a timezone\\nconst nyDateTime = Temporal.ZonedDateTime.from({\\n timeZone: \'America/New_York\',\\n year: 2025, month: 2, day: 18, hour: 9, minute: 30\\n});\\n\\n// Formatting\\ndate.toString(); // \\"2025-02-18\\"\\nnyDateTime.toString(); // \\"2025-02-18T09:30:00-05:00[America/New_York]\\"\\n\\n// Duration and arithmetic (returns new instances)\\nconst futureDate = date.add({ days: 7 });\\nconst duration = date.until(futureDate);\\n\\n
You can experiment with Temporal using the polyfill available at npmjs.com/package/@js-temporal/polyfill.
\\nThere are many options to choose from, so it’s understandable if making a decision is hard. Here’s a comparison table to help you make an informed decision:
\\nFeature | \\nNative \\n Date | \\ndate-fns | \\nDay.js | \\nMoment.js | \\n
---|---|---|---|---|
Bundle size | \\n0 KB | \\n13 KB | \\n2 KB | \\n67 KB | \\n
Immutability | \\nNo | \\nYes | \\nYes | \\nNo | \\n
Tree-shaking | \\nN/A | \\nExcellent | \\nGood | \\nPoor | \\n
Timezone support | \\nBasic | \\nVia date-fns-tz | \\nVia plugin | \\nVia plugin | \\n
Localization | \\nGood | \\nExcellent | \\nGood | \\nExcellent | \\n
TypeScript support | \\nBasic | \\nExcellent | \\nGood | \\nVia DefinitelyTyped | \\n
Learning curve | \\nModerate | \\nLow | \\nLow | \\nModerate | \\n
Active development | \\nSlow | \\nActive | \\nActive | \\nMaintenance only | \\n
- Native Date methods: best for small projects, simple date displays, and when bundle size is critical

When selecting a date library, performance implications should factor into your decision, especially for date-heavy applications:
Each library adds weight to your application:

- Native Date: 0 KB (built into JavaScript)
- Day.js: ~2 KB
- date-fns: ~13 KB, and tree-shaking keeps only what you import
- Moment.js: ~67 KB

In terms of runtime speed, native Date methods are the fastest but limited in functionality; the libraries wrap native Date objects, which are memory-efficient and perform well.

Here's a simplified comparison:
\\n// Test with 100,000 operations\\nconst COUNT = 100000;\\n\\n// Native JS\\nconsole.time(\'Native\');\\nfor (let i = 0; i < COUNT; i++) {\\n new Date().toISOString();\\n}\\nconsole.timeEnd(\'Native\'); // Typically fastest\\n\\n// date-fns\\nconsole.time(\'date-fns\');\\nfor (let i = 0; i < COUNT; i++) {\\n format(new Date(), \'yyyy-MM-dd\\\\\'T\\\\\'HH:mm:ss.SSS\\\\\'Z\\\\\'\');\\n}\\nconsole.timeEnd(\'date-fns\'); // Close second\\n\\n// Day.js\\nconsole.time(\'Day.js\');\\nfor (let i = 0; i < COUNT; i++) {\\n dayjs().format(\'YYYY-MM-DDTHH:mm:ss.SSS[Z]\');\\n}\\nconsole.timeEnd(\'Day.js\'); // Usually faster than Moment\\n\\n// Moment.js\\nconsole.time(\'Moment\');\\nfor (let i = 0; i < COUNT; i++) {\\n moment().format(\'YYYY-MM-DDTHH:mm:ss.SSS[Z]\');\\n}\\nconsole.timeEnd(\'Moment\'); // Usually slowest\\n\\n
A common mistake: comparing Date objects with == or === (use .getTime() instead):

// Incorrect: direct comparison checks object identity, not the date value
const date1 = new Date('2025-02-18');
const date2 = new Date('2025-02-18');
if (date1 === date2) { /* This will never execute */ }

// Correct: compare timestamps
if (date1.getTime() === date2.getTime()) { /* This works */ }
JavaScript date formatting doesn’t have to be a headache. The key is choosing the right tool for your specific needs:
- Native Date methods: work well for simple use cases and offer good performance

When in doubt, start simple and only reach for more complex solutions when needed. Your bundle size and application performance will thank you!
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nTypeScript adds static typing to JavaScript code, which helps reduce unpredictable behavior and bugs. In the past, TypeScript code couldn’t run directly in Node.js. The default way to run TypeScript in the runtime was to first compile it to JavaScript using the TypeScript CLI tool (tsc
) and then run the resulting code.
If you wanted to run TypeScript directly in Node.js, you had to rely on third-party tools like ts-node and tsx. However, with the v22.6 release of Node.js, the runtime added experimental lightweight support for TypeScript. Node.js implemented TypeScript support using a method known as type stripping, effectively turning Node.js into a TypeScript runner.
\\nBut why do we need TypeScript runners at all? Why wasn’t the original TypeScript compilation process enough? In this article, we’ll explore the benefits of running TypeScript directly in Node.js and compare different ways to do so.
\\nThe official TypesScript library does a lot of important work in a project. It checks the code for syntax errors, validates and infers types, parses the tsconfig.json
file for custom instructions, and transpiles TypeScript code to JavaScript — which a JavaScript runtime can now execute.
Technically, the TypeScript compiler should have been enough for developing TypeScript apps, as the most popular JavaScript runtimes — a browser or Node.js — did not understand TypeScript code anyway. But in practice, a TypeScript runner can be more efficient, especially during development, a convenience that neither Node.js nor the TypeScript library offered in the past.
\\nA TypeScript runner is any program that directly executes TypeScript code. Other JavaScript runtimes like Bun and Deno have TypeScript runners built in. The runner allows developers to execute TypeScript code in one step instead of the default two steps of transpiling and then executing the resulting JavaScript code.
\\nTypeScript runners handle two development steps at once. Also, using them is usually faster than the default compilation process (during fast-paced development). This is because the runners are typically heavily optimized, which is especially helpful in larger projects. TypeScript runners are also less complicated to use when you just want to test or execute a script.
\\nSome runners come with extra features like a watch mode and a REPL, and some of them can even perform type checking.
\\nThe rest of this article will look into three ways of running TypeScript in Node.js:
\\nWe’ll list the features of each tool and discuss the trade-offs developers make when choosing between them.
\\nIn v22.6, Node.js released the --experimental-strip-types
CLI flag. When used, this flag transforms a TypeScript file to JavaScript (using type stripping) before executing it. Type stripping is a process where all the type declarations and annotation syntax in TypeScript code are removed (stripped). If TypeScript code contains only erasable syntax, type stripping automatically turns it into JavaScript code.
“Erasable syntax” in this context refers to TypeScript-specific syntax that doesn’t contain any values needed at runtime. Node.js handles type stripping using the npm module amaro. amaro is a lightweight and optimized tool that replaces erasable code in TypeScript with white space (similar to ts-blank-space). Replacing the erasable code with white space means there is no definite need to generate source maps since errors are detected at their original line of code.
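Conceptually, the transformation looks like this: annotations become blanks, so the JavaScript that runs keeps the original line and column positions (a simplified sketch, not amaro's literal output):

```ts
// Before stripping (what you write):
function add(a: number, b: number): number {
  return a + b;
}

// After stripping (what Node.js executes; annotations become whitespace):
//
// function add(a        , b        )         {
//   return a + b;
// }
```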
\\nThe type stripping process in Node.js gives it very minimal support for TypeScript. Since its v23.6 release, Node.js now enables --experimental-strip-types
by default. This means you can run TypeScript files in the runtime without any flag (as long as you're on v23.6 or higher).
In the v22.7 release, Node.js added a new CLI flag: --experimental-transform-types
. When used, this flag enables type stripping, as well as transforms TypeScript code with non-erasable syntax to JavaScript. However, it needs to generate source maps and thus is not a very lightweight approach (compared to just type stripping).
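For example, an enum is non-erasable syntax because it compiles to a real JavaScript object, so a file like this (a small sketch) is rejected by plain type stripping but runs with node --experimental-transform-types:

```ts
// enums.ts: an enum produces a runtime object,
// so type stripping alone cannot blank it out
enum Color {
  Red,
  Green,
  Blue,
}

console.log(Color.Red); // 0
```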
Node.js can only remove erasable code — it cannot type check the code. Instead, it leaves that to the developer. To ensure fast-paced development, the IDE is usually sufficient for flagging syntax or type errors. You can implement type checking in the linting process before committing to the version control system or deploying to production.
\\nHere are some downsides to consider before using this new Node.js feature:
- Code with non-erasable TypeScript syntax (such as enums or namespaces) only runs with the --experimental-transform-types flag
- Node.js ignores the tsconfig.json file and does not use any custom instructions. This implies that the internal transpilation process is not configurable in any way
file, nor does it type check the code, Node.js still highly recommends type checking your TypeScript code before deployment. In order to complement this new type stripping feature, TypeScript released the config option and CLI flag named erasableSyntaxOnly
:
// tsconfig.json
{
  "compilerOptions": {
    // ...
    "erasableSyntaxOnly": true
  }
}
When enabled, this option (set to true
in the TypeScript configuration file or called in the CLI as --erasableSyntaxOnly
), makes sure the TypeScript compiler returns an error message when the source code contains syntax that cannot be type stripped. When a developer enables this option, they can be sure that whatever code the tsc
tool compiles successfully can also run on Node.js.
This new option is very important, especially for code editors that rely on the tsconfig.json
to know when to show errors and warnings.
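For instance, with the option enabled, tsc rejects constructs that generate runtime code, such as class parameter properties (a sketch):

```ts
class Point {
  // Error under erasableSyntaxOnly: parameter properties
  // emit property assignments at runtime, so they can't be stripped
  constructor(public x: number, public y: number) {}
}
```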
This section offers a step-by-step guide of how to use Node.js type stripping in a project. But before these steps, make sure to update your code editor to its latest version for better editor type checking and support. Also, ensure the Node.js version is v22.6 or greater. Here are the steps:
\\nCreate a tsconfig.json
file. Even though Node.js does not work with the file, make sure to use the following minimum configuration. This closely mirrors how Node.js handles TypeScript files and ensures consistency regardless of the tools one uses to run the TypeScript code:
// tsconfig.json\\n{\\n // ...\\n \\"compilerOptions\\": {\\n \\"noEmit\\": true,\\n \\"target\\": \\"esnext\\",\\n \\"module\\": \\"nodenext\\",\\n \\"rewriteRelativeImportExtensions\\": true,\\n \\"erasableSyntaxOnly\\": true,\\n \\"verbatimModuleSyntax\\": true\\n }\\n}\\n
Make sure to install Node.js types for type safety when working with Node.js modules and APIs:
\\nnpm install --save-dev @types/node\\n
Next, write TypeScript code in a new file:
\\n// index.ts\\nfunction add(a: number, b: number) {\\n return a + b;\\n}\\n\\nconsole.log(add(2, 2));\\n
Run the code on the command line:
\\nnode index.ts # If using Node.js v23.6 or more\\nnode --experimental-strip-types index.ts # If using Node.js v22.6 or more\\n
This produces the outcome of 4.
You can also combine type stripping for running TypeScript code during development with a script that type checks and compiles the code when needed, perhaps when pushing code updates to the version control system or when deploying. Using the above index.ts
file, add type checking by first installing the official TypeScript library:
npm install --save-dev typescript\\n\\n
Then configure the package.json
to run using Node.js watch mode for development and an npm script to type check with TypeScript:
// package.json\\n{\\n // ...\\n \\"scripts\\": {\\n \\"dev\\": \\"node --watch index.ts\\",\\n \\"typecheck\\": \\"tsc\\"\\n },\\n}\\n\\n
The best use case for Node.js type stripping is fast code execution during development. However, since the feature is still technically unstable at the time of writing, it's not recommended for production use. Additionally, you shouldn't publish npm packages that contain raw TypeScript. So, after running type checks with tsc
, you’ll need to use a transpiler tool like swc or esbuild to instantly convert the code to JavaScript for execution.
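A sketch of such a pipeline might look like this (script names and the entry file are illustrative):

```json
// package.json
{
  "scripts": {
    "dev": "node --watch index.ts",
    "typecheck": "tsc",
    "build": "esbuild index.ts --platform=node --outfile=dist/index.js"
  }
}
```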
ts-node is a third-party npm module used to run TypeScript code directly in Node.js. It is also a TypeScript REPL library for Node.js. ts-node has full TypeScript support out of the box, unlike Node.js type stripping, where full support has to be enabled. ts-node also type checks TypeScript code by default unless a developer opts out with a --transpileOnly
flag. The compilation process is also customizable with the tsconfig.json
file when using ts-node.
In addition to all of this, ts-node has a plugin ecosystem and supports third-party transpilers (like one that uses swc
). ts-node is suitable for precompiling code before it is deployed to production, but it can also run TypeScript code in production if a developer wishes to use it.
In order to use ts-node in a Node.js project, follow these steps:
\\nFirst, install ts-node as a dev dependency. Also, install typescript
as it is a peer dependency to ts-node:
npm install -D ts-node typescript\\n\\n
Then, assuming a TypeScript file index.ts
exists like in the previous section, execute it in Node.js like so:

npx ts-node index.ts # If the module type is CommonJS
node --loader ts-node/esm index.ts # If using the ESM module type
And that is all you need to run a file with ts-node
!
You can also set up a package.json
to have scripts that handle type checking and transpilation separately:
{\\n // ..\\n \\"type\\": \\"module\\",\\n \\"scripts\\": {\\n \\"dev\\": \\"node --watch --loader ts-node/esm index.ts\\",\\n \\"typecheck\\": \\"tsc\\",\\n },\\n}\\n\\n
In the tsconfig.json
, set up configurations for ts-node with options in the \\"ts-node\\"
property:
{\\n // ...\\n \\"ts-node\\": {\\n \\"transpileOnly\\": true\\n }\\n}\\n\\n
ts-node has a few configuration options and CLI flags, all detailed in its documentation.
\\nWith the package.json
file above, running npm run dev
during development executes the TypeScript code.
ts-node can type check as well as execute TypeScript code. There is usually little need to type check TypeScript code in the development phase because the code editors will highlight type errors. However, to ensure quality before deployment, it is important to type check the TypeScript code.
\\n\\nAfter type checking, one can use ts-node to run TypeScript code directly in production. But that adds a risky and avoidable overhead to just running JavaScript code directly. For example, when working with ES modules, you can’t use ts-node in production because it relies on loaders to transpile the TypeScript code (loaders are an experimental feature of Node.js)
\\nHowever, for CommonJS apps, make sure to enable the swc
plugin, and enable the transpileOnly
option. Finally, when type checking with tsc
, make sure to use the --noEmit
flag or set its equivalent in the tsconfig.json
file.
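Putting those recommendations together, the relevant tsconfig.json section might look like this (a sketch; transpileOnly and swc are documented ts-node options):

```json
// tsconfig.json
{
  "ts-node": {
    "transpileOnly": true,
    "swc": true
  },
  "compilerOptions": {
    "noEmit": true
  }
}
```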
ts-node may offer many features, but there are a few downsides to using it. First, the code base is not well-maintained. At the time of writing, there have been no new minor or major releases for two years, even though its repository shows some highlighted “issues” that need to be attended to.
\nSecond, ts-node is very advanced; it can therefore be more complicated for a beginner to use than an alternative like tsx or Node.js type stripping.
Use ts-node for a faster development pace in transpile-only mode. That is the best use case for the package.
\\nThe npm package tsx (not to be confused with .tsx
, the file extension for JSX written with TypeScript) calls itself a “Node.js enhancement.” It is a third-party package that uses Node.js to execute TypeScript code just like ts-node. tsx tries to be intuitive and beginner-friendly. When installed, tsx is a node
alias; this means it accepts all Node.js CLI flags and arguments. The difference is that tsx is also capable of running TypeScript files.
Without a tsconfig.json
file in a project, tsx runs the provided TypeScript code with “modern and sensible defaults”. This means using a tsconfig.json
file with tsx is optional.
tsx comes with a robust watch mode feature. Like Node.js type stripping, tsx doesn’t support type checking. However, it comes with a TypeScript REPL like ts-node. In addition to all of this, because tsx uses esbuild
under the hood and is heavily optimized itself, it is guaranteed to transpile and execute TypeScript very quickly.
To use tsx, first install it as a dev dependency to a project:
\\nnpm install -D tsx\\n\\n
A tsconfig.json
file might not be compulsory with tsx, but the library recommends it. Create one with the following recommended configuration:
// tsconfig.json\\n\\n{\\n // ...\\n \\"compilerOptions\\": {\\n \\"moduleDetection\\": \\"force\\",\\n \\"module\\": \\"Preserve\\",\\n \\"resolveJsonModule\\": true,\\n \\"allowJs\\": true,\\n \\"esModuleInterop\\": true,\\n \\"isolatedModules\\": true,\\n }\\n}\\n\\n
After that, use tsx to execute a file like you would with the node
CLI command:
npx tsx index.ts
Or set up scripts in the package.json
file:
// package.json\\n{\\n // ...\\n \\"scripts\\": {\\n \\"dev\\": \\"tsx index.ts\\"\\n }\\n}\\n\\n
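tsx's built-in watch mode, mentioned earlier, re-runs the file on every change:

```
npx tsx watch index.ts
```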
tsx utilizes esbuild
for transpilation and is potentially a good choice for production. However, a developer choosing this route would still have to handle type checking separately. And, just like ts-node, tsx shines most during development; running it in production introduces an avoidable overhead.
tsx is a regularly updated library with fairly detailed documentation. A downside to using it is its installation bundle size, which was 23 MB at the time of writing (according to https://pkg-size.dev/tsx). But that is not a big reason not to use the library, just something to note.
\\nFeatures | \\nNode.js type stripping | \\nts-node | \\ntsx | \\n
---|---|---|---|
GitHub stars (at the time of writing) | \\nN/A | \\n13K Stars | \\n10K Stars | \\n
Bundle install size | \\nN/A | \\n27MB | \\n23MB | \\n
Requires CommonJS modules without .ts file extension | \\n❌ | \\n✅ | \\n✅ | \\n
Imports ES modules without .ts file extension | \\n❌ | \\n❌ | \\n✅ | \\n
Uses tsconfig paths | \\n❌ | \\n❌ | \\n✅ | \\n
TypeScript REPL | \\n❌ | \\n✅ | \\n✅ | \\n
Built-in watch mode | \\n✅ (Node.js watch mode) | \\n❌ | \\n✅ | \\n
Type checking | \\n❌ | \\n✅ | \\n❌ | \\n
Parses tsconfig.json | \\n❌ | \\n✅ | \\n✅ | \\n
Full TypeScript support | \\nExperimental with --experimental-transform-types | \\n✅ | \\n✅ | \\n
The aim of this article was to show different ways of running TypeScript directly in Node.js. It started by explaining why the TypeScript compiler alone might not be enough for developing TypeScript apps, then introduced the Node.js type stripping feature, how to use it, and its drawbacks.

We then discussed ts-node and tsx, two third-party npm libraries that provide a TypeScript REPL and execute TypeScript code in Node.js, and looked at how to use them and the possible downsides each one has.

The addition of type stripping to Node.js means that, for many use cases, a developer may no longer need to install a third-party TypeScript runner. With the `--experimental-transform-types` feature in development, Node.js is on a path to fully run TypeScript applications like Deno or Bun sometime in the future.
If you've been working with Tailwind CSS for a while, you probably know it doesn't follow the traditional approach when it comes to styling content-rich UIs, like blogs, documentation pages, or CMS-powered content. This can make using the library feel tedious and repetitive for these use cases.
\\nThat’s because Tailwind removes all default browser styling by design. This is beneficial for most use cases, since it saves you from having to reset and override generic user-agent styles. However, it can become problematic when you’re trying to style content you don’t control, such as rich text from a CMS or Markdown-generated content.
\\nDue to the importance of the style reset feature, the Tailwind team didn’t remove it. Instead, they introduced a Typography plugin called @tailwindcss/typography (also known as the prose plugin) that gives you beautiful, pre-styled typography out of the box.
\\nIn this article, I’ll walk you through how to set up and use the Tailwind Typography plugin in your project.
\\nThe Tailwind Typography plugin is a first-party plugin that transforms raw, unstyled HTML content into clean, well-formatted typography using a single utility prose
class.
It applies a well-balanced set of typographic defaults to the child elements of any container it’s applied to, therefore automatically styling elements like paragraphs, headings, lists, blockquotes, and tables to look polished and readable right out of the box.
\\nThink of the plugin as a pre-defined style for content you don’t directly control, such as blog posts, documentation pages, or Markdown-rendered content, without sacrificing Tailwind’s core utility-first styling approach or disabling its base styles.
\\nAs mentioned earlier, the best time to use the Tailwind Typography plugin is when you’re dealing with content you don’t have direct control over. But what exactly does that mean?
\\nIn this context, “content you don’t control” refers to text that either lives on a server or comes in formats that you can’t directly modify with utility classes, such as Markdown or CMS-generated HTML strings.
\\nFor example, if your page content is written directly in HTML and structured like this:
<article>\n <h2>The Rise of Artificial Intelligence in Everyday Life</h2>\n <div>\n ...\n </div>\n <p>\n ...\n </p>\n <p>\n ...\n </p>\n <blockquote>\n ...\n </blockquote>\n <h3>Challenges and Opportunities</h3>\n <p>\n ...\n </p>\n <ul>\n ...\n </ul>\n <pre>\n ...\n </pre>\n</article>\n\n
\\n…it will initially appear unstyled due to Tailwind stripping away the browser’s default styles:
\\nBut since you have direct access to the HTML, you can simply apply utility classes to each element and style them as needed.
\\nHowever, if the content is coming from a CMS or is in Markdown format, you won’t be able to target or modify the elements directly, as seen in this example:
const Blog = ({ postId }: { postId: number }) => {\n const [blogPost, setBlogPost] = useState<BlogPost | null>(null);\n\n useEffect(() => {\n // Data fetching logic\n }, [postId]);\n\n return (\n <article>\n <h1>{blogPost?.title}</h1>\n ...\n <div dangerouslySetInnerHTML={{ __html: blogPost?.content ?? \"\" }} />\n </article>\n );\n};\n\n
Styling the generated content using Tailwind CSS or even vanilla CSS will be challenging and time-consuming, as the content is dynamically generated from the CMS, and we do not have direct access to the corresponding HTML elements.
\\n\\nThat’s where the Tailwind Typography plugin comes in handy. Instead of digging through dev tools to analyze the CMS-generated HTML and identifying patterns post-render, you can simply install the @tailwindcss/typography
plugin.
Then slap the prose
class on the container, and the content will be styled automatically. No extra work needed!
To get started with the Tailwind Typography plugin, make sure Node.js is installed on your machine and Tailwind is properly set up in your project. Then run the following command to install the plugin:
\\nnpm install -D @tailwindcss/typography\\n\\n
After installing the package, you’ll need to configure it. How you do that depends on the version of Tailwind you’re using.
\\nFor Tailwind versions below v4, the plugin needs to be added via the tailwind.config.js
file as follows:
module.exports = {\\n plugins: [\\n require(\'@tailwindcss/typography\')\\n ]\\n}\\n\\n
In version 4, the tailwind.config.js
file is deprecated. Plugin configuration now happens in your global CSS file, usually App.css
or index.css
, depending on the framework. Adding a plugin is now as simple as importing it into one of these CSS files:
@import \\"tailwindcss\\";\\n@plugin \\"@tailwindcss/typography\\";\\n\\n
Like I mentioned earlier, using the plugin is straightforward. Simply add the prose
class to the wrapper element around your content, and it’ll be automatically formatted.
In the previous section, the wrapper was the article
element. We already saw how the content looks when rendered with Tailwind’s default styles. Now, let’s add the prose
class to that same article
element:
<article className=\"prose\">\n ...\n</article>\n\n
Here’s what happens:
\\nThe content instantly looks more polished.
\\nTo really test this out, let’s look at content that’s dynamically rendered from the server:
\\nconst Blog = () => {\\n ...\\n\\n useEffect(() => {\\n const fetchBlogContent = async () => {\\n try {\\n const response = await fetch(\\"https://jsonfakery.com/blogs/random\\");\\n if (!response.ok) {\\n throw new Error(\\"Failed to fetch blog content\\");\\n }\\n const data = await response.json();\\n setBlogContent(data.main_content);\\n } catch (err: any) {\\n setError(err.message);\\n }\\n };\\n\\n fetchBlogContent();\\n }, []);\\n\\n return (\\n <article className=\\"prose prose-slate mx-auto my-0 max-w-2xl\\">\\n <div dangerouslySetInnerHTML={{ __html: blogContent ?? \\"\\" }} />\\n </article>\\n );\\n};\\n\\n
As expected, the result is consistent:
\\nOnce you add the prose
class to the wrapper element, it’ll get formatted automatically:
The prose
class pretty much takes care of formatting, responsiveness, and overall aesthetics of your content automatically. However, if you’d like more control over how your content is styled, Tailwind provides options for customization to suit your preferences.
While the Tailwind Typography plugin provides a sensible set of defaults for styling content, it also offers several ways to tailor the appearance to your specific design needs.
\\nSo far, we’ve used the prose
class and demonstrated how it works. However, this class is part of a larger set of prose
modifier classes that you can use to control the plugin’s behavior. One category of these modifiers is used to define typography colors based on the five gray scales that Tailwind provides by default:
- `prose-gray` — Gray (default)
- `prose-slate` — Slate
- `prose-zinc` — Zinc
- `prose-neutral` — Neutral
- `prose-stone` — Stone

Let's say your project uses the Stone shade of gray, which has a warmer tone. You can make your typography match this by adding the `prose-stone` class to your content wrapper.
\\nBut here’s the key part: you still need to include the base prose
class. The modifier on its own won’t work. So the full class would look like this:
<article className=\\"prose prose-stone\\">\\n <div dangerouslySetInnerHTML={{ __html: blogContent ?? \\"\\" }} />\\n </article>\\n\\n
This applies the stone-colored typography styles, while keeping everything else handled by the base prose class:
\\nIf you want to go beyond grayscale options, you can create a custom color theme using the theme.extend.typography
API in your Tailwind config file. This way, you can apply custom colors to specific typography elements like blockquotes, code blocks, headings, and lists.
This is especially useful if you want to align your color choices with your brand or project's design. For font customization, you can read more about how to add custom fonts in Tailwind CSS on the LogRocket blog.
\\nNow, you might be thinking: isn’t the config file deprecated in Tailwind v4? Technically, yes, but this is one of the few exceptions where using the config file is still valid.
\\nStart by creating a tailwind.config.js
file in your project’s root if you don’t already have one. Then add the @config
directive to your global CSS file.
N.B., if you're using an older version of Tailwind, you can skip this step:
\\n@import \\"tailwindcss\\";\\n@plugin \\"@tailwindcss/typography\\";\\n@config \\"./tailwind.config.js\\";\\n\\n
Next, open the config file and add a custom theme (let’s call it orange) inside the theme.extend.typography
section. In this custom theme, you’ll override the plugin’s default color variables with your own values under the css
key:
module.exports = {\\n theme: {\\n extend: {\\n typography: ({ theme }) => ({\\n orange: {\\n css: {\\n \\"--tw-prose-headings\\": theme(\\"colors.orange.900\\"),\\n \\"--tw-prose-quotes\\": theme(\\"colors.orange.900\\"),\\n \\"--tw-prose-quote-borders\\": theme(\\"colors.orange.300\\"),\\n },\\n },\\n }),\\n },\\n },\\n};\\n\\n
Here’s what’s happening:
- The `--tw-prose-*` variables are the internal values the Typography plugin uses
- The custom color values are pulled from your Tailwind palette with the `theme()` helper

You can find a list of these variables in the plugin's documentation or by inspecting the CSS generated by the default `prose` class in your browser's developer tools.
\\nTo use your new theme in your markup, combine the base prose
class with your custom modifier: prose-{theme name}
(in this case, prose-orange
) like this:
<article className=\\"prose prose-orange\\">\\n <div dangerouslySetInnerHTML={{ __html: blogContent ?? \\"\\" }} />\\n </article>\\n\\n
Lastly, restart your dev server or rebuild your project to see the updates take effect.
\\nThe best part is that only the elements you’ve customized, such as headings and blockquotes in our example, will use the new orange color. Everything else will stay the same:
\\nSince the Tailwind Typography plugin handles most of the styling for you, a good edge case to think about is how it behaves when switching between color themes, like light and dark mode.
\\nIn some cases, you might notice that the text becomes hard to read or even disappears entirely when switching to dark mode. That’s because the default typography colors aren’t designed for dark backgrounds:
\\nTo fix this, the Tailwind Typography plugin provides a built-in modifier, dark:prose-invert
, that lets us invert the typography color based on the current color theme. This doesn’t just do a basic color inversion. Instead, it switches to a set of handcrafted typography colors made specifically for dark mode.
Just like other modifiers, you use it by adding it directly to your content wrapper. For example:
\\n<article className=\\"prose prose-stone dark:prose-invert\\">\\n <div dangerouslySetInnerHTML={{ __html: blogContent ?? \\"\\" }} />\\n </article>\\n\\n
Once that’s in place, your typography will automatically adapt based on the active theme, light or dark, without any extra configuration:
\\nWhile the Tailwind Typography plugin is great for making text look clean and readable with minimal effort, it does have a few quirks you should watch out for. Here are some common issues and tips to keep things running smoothly when using the prose class:
\\nTailwind has “preflight” base styles that are designed to smooth over cross-browser inconsistencies, but they can sometimes be too broad. If the typography plugin isn’t specific enough, these base styles might inadvertently override the typography plugin. For example, you might find that the default link color or font-weight isn’t what the plugin intends because a more general Tailwind style is taking precedence.
\\nIt might seem logical to wrap different sections of content in separate prose
containers. But nesting them can lead to conflicting styles. The outer prose
wrapper will cascade its styles down, which might override or clash with the inner one.
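Instead of nesting, the plugin's documented escape hatch is the `not-prose` class, which lets a region inside a `prose` container opt out of the typography styles entirely:

```html
<article class="prose">
  <h2>Styled heading</h2>
  <p>Styled paragraph...</p>
  <div class="not-prose">
    <!-- Elements in here are excluded from the typography styles -->
  </div>
</article>
```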
If you’re adding your own styles, as we did with the custom color theme, you need to be aware that the Typography plugin uses fairly specific selectors. If your styles don’t seem to be applying, it might be a specificity issue. Try making your selectors more specific or target parent elements as needed.
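Before fighting specificity with custom CSS, it's also worth knowing the plugin ships element modifiers that override individual child elements straight from the markup, sidestepping the selector battle entirely:

```html
<article class="prose prose-headings:text-sky-900 prose-a:text-blue-600 prose-img:rounded-xl">
  <!-- CMS or Markdown content -->
</article>
```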
\\nMake sure the Markdown used in your content is consistent. For example, always use **
for bold and *
for italics instead of mixing it up with alternatives like __
or _
. Inconsistent syntax can lead to unexpected rendering.
If your content is coming from a CMS, see if it’s possible to configure the rich text editor to output consistent and clean Markdown. It’s not always possible, but it’s worth checking.
If you'd like to play around with the plugin, you can find the official demo on Tailwind Play.
\\nTailwind prose can save you a lot of effort, especially when you’re dealing with content you don’t fully control, like Markdown or CMS-driven text. The prose class takes care of the messy styling bits so you can focus on the actual logic and functionality of your app. If it’s not already part of your setup, now’s a good time to start using it.
\\nHappy hacking!
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nCustom fonts can transform the look and feel of a website, and Tailwind simplifies this with flexible font customization using local files and Google Fonts. These custom fonts can be used to improve readability, brand identity, and evoke certain emotions, all while maintaining Tailwind’s utility-first principles.
\\nCustom fonts are typically either self-hosted, meaning they reside on the same web server as your website, or served from an external provider like Google Fonts. In the case of self-hosted fonts, you can upload your font files to your server and reference them in your code. Alternatively, with external providers, you simply link to their hosted fonts.
\\nHowever, when working with Tailwind CSS, the process of using custom fonts, whether local or external, requires a slightly different setup.
\\nIn this article, I’ll walk you through how to use Google Fonts and locally installed fonts in your Tailwind projects to help you improve your project typography and design consistency. I’ll also show you how to integrate a custom Tailwind font family into your project.
\\nN.B., This article focuses on font family customizations in Tailwind CSS. If you’re looking to style rich text content like blog posts or documentation, check out our guide on the Tailwind Typography plugin.
\\nEditor’s note: This article was updated by David Omotayo in May 2025 to answer common FAQs around custom fonts in Tailwind CSS and add information about the default Tailwind font family.
\\nTailwind CSS is a utility-first CSS framework that allows you to build custom designs without ever leaving your HTML. It comes with a set of utility classes that you can use to style your elements without having to write any CSS code.
\\nTailwind is a great tool for building websites, but it doesn’t have any built-in support for web fonts like Google Fonts. If you want to use custom fonts in your project, you have to add them yourself. But don’t worry, it’s not as difficult as it sounds!
\\nIt would be best to have a small application to experiment with as we progress, so I’ve set up a starter project on GitHub.
\\nTo set up this Next.js and Tailwind project, follow the instructions in the README file. Once you clone this project, run the following command to install the required dependencies:
\\nnpm install\\n\\n
Then, start the dev server by running this command:
\\nnpm run dev\\n\\n
You should see the following:
\\nNow, let’s add some custom fonts to this project.
\\nTailwind CSS offers the flexibility to define and integrate custom font families into your project, which you can incorporate alongside existing font families. Let’s explore how to create and apply custom font families in Tailwind:
\\nThe Tailwind framework was built with customization in mind. By default, Tailwind
\\nsearches the root directory for a tailwind.config.js
file that contains all our customizations.
To create the tailwind.config.js
file, run the following command, which uses the npx
CLI:
npx tailwindcss init\n\n
Our starter project already contains a tailwind.config.js
file. Open the file and navigate to the theme section. Define your custom font families within the fontFamily
key:
// tailwind.config.js\\nmodule.exports = {\\n theme: {\\n extend: {\\n fontFamily: {\\n customFont: [\'\\"Custom Font\\"\', \\"sans-serif\\"],\\n // Add more custom font families as needed\\n },\\n },\\n },\\n // Other Tailwind configuration settings\\n};\\n\\n
Replace the custom font with the name of your desired font family. You can specify multiple fallback font families in case the primary one isn’t available.
\\nNow that we’ve defined our custom Tailwind font family, we can apply it to any element in our project. To do this, we need to add the font family to the element’s class attribute:
\\n<p className=\\"font-customFont\\">...</p>\\n\\n
Google Fonts are a great way to add custom fonts to your website. They are free, easy to use, and have a wide variety of fonts to choose from.
\\nTailwind recommends integrating Google Fonts through the Tailwind CSS Typography plugin for the best rendering performance. Alternatively, you can use Google Fonts with the following methods:
\\nnpm
packages such as the google-fonts plugin as a third-party solution@import
ruleOne way to add a custom font in Tailwind is by using the Typography plugin developed by the Tailwind team. This plugin offers an alternative approach to font customization that differs from the standard method. However, since it follows a different setup process, it’s beyond the scope of this article. You can check out our dedicated guide on the Tailwind Typography plugin to learn more about this method.
\\nAnother way to use Google Fonts in Tailwind CSS is to add the CDN link to your index.html
file, or in our case, the _document.js
file, as we are using Next.js:
// _document.js\n<Head>\n <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\" />\n <link rel=\"preconnect\" href=\"https://fonts.gstatic.com\" crossOrigin=\"anonymous\" />\n <link\n href=\"https://fonts.googleapis.com/css2?family=Poppins:wght@400;500&display=swap\"\n rel=\"stylesheet\"\n />\n</Head>\n\n
Next, we must add the font family to the tailwind.config.js
file.
Let’s tell Tailwind to use the Poppins font that we added instead. Open up your tailwind.config.js
file and update the configuration to inherit and extend from fontFamily.sans
:
/** @type {import(\'tailwindcss\').Config} */\\nconst { fontFamily } = require(\\"tailwindcss/defaultTheme\\");\\nmodule.exports = {\\n content: [\\n \\"./pages/**/*.{js,ts,jsx,tsx}\\",\\n \\"./components/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n theme: {\\n extend: {\\n fontFamily: {\\n poppins: [\\"Poppins\\", ...fontFamily.sans],\\n },\\n },\\n },\\n plugins: [],\\n};\\n\\n
We’re extending the default Tailwind configuration to add a new font-poppins
utility class. This means it will be available alongside Tailwind’s default font classes.
This approach aligns well with Tailwind’s utility-first styling philosophy. If we ever need to use the Poppins font elsewhere in the project, we can simply add the font-poppins
class to the element: no additional setup required. It also allows us to apply different fonts at various breakpoints if we want to.
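For example, a heading could fall back to the default sans stack on small screens and switch to Poppins from the medium breakpoint up:

```html
<h1 class="font-sans md:font-poppins">Responsive typography</h1>
```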
Now, let’s use the font in our project. Open up the index.js
file and add the font-poppins
class to the section element:
// index.js\\n<section className=\\"font-poppins text-gray-600\\">\\n ...\\n</section>\\n\\n
Now, let’s see what we have:
\\nOur custom font is now applied to the text in our project.
\\nAnother way to integrate Google Fonts in Tailwind CSS is to use the @import
rule. This rule allows you to import a CSS file into another CSS file, which is useful when you want to use a CSS file that is not in the same directory or folder as the CSS file you are importing it into.
The CSS file can also be on a different server or even on a different domain. We will use it to import the Google Fonts CSS file into our Tailwind CSS file.
\\nTo use the @import
rule, we need to add the Google Fonts CSS file to our global.css
file:
/* global.css */\\n@tailwind base;\\n@tailwind components;\\n@tailwind utilities;\\n@import url(\\"https://fonts.googleapis.com/css2?family=Poppins:wght@400;500&display=swap\\");\\n\\n
To test this out, remove the CDN link from the _document.js
file.
Now, let’s see what we have:
\\nOur font is still applied to the text in our project.
\\n\\nWhenever I Google “How to use custom fonts in Tailwind,” the results say little about using local/downloaded fonts. However, 90 percent of the time, that’s precisely what I’m looking for. This section explains how to use locally installed fonts alongside Google Fonts, giving you the flexibility to style your Tailwind project.
\\nStarting with the v3 release, Tailwind CSS recommends adding local fonts via modern bundler workflows such as unplugin-fonts or postcss-font-magician to simplify font loading and improve the developer experience.
\\nTo begin, we need to find and install a font. In this example, we will use the Oswald font, a Google web font that was created by Vernon Adams, Kalapi Gajjar, and Cyreal.
\\nTo install Oswald, we need to download the font files from the Google Fonts website. Search for the font you want and click on the Download family button. This will download a zip file containing the font files:
\\nOnce the download is complete, unzip the file and copy the font files into the public/fonts
folder in your project. If you don’t have a fonts folder, create one.
Install the unplugin-fonts
package into your demo TailwindCSS project using the following command:
npm i unplugin-fonts\\n\\n
Next, register unplugin-fonts
as a plugin in your tailwind.config.js
file to use the Oswald local font:
const { fontFamily } = require(\\"tailwindcss/defaultTheme\\");\\nconst Unfonts = require(\\"unplugin-fonts\\");\\n// tailwind.config.js\\nmodule.exports = {\\n content: [\\n \\"./pages/**/*.{js,ts,jsx,tsx}\\",\\n \\"./components/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n theme: {\\n fontSize: {\\n title: `2.6rem;`,\\n paragraph: `1.2rem;`,\\n },\\n fontFamily: {\\n poppins: [\\"Poppins\\", ...fontFamily.sans],\\n adelia: [\\"ADELIA\\", \\"cursive\\"],\\n },\\n },\\n plugins: [\\n require(\\"@tailwindcss/typography\\"),\\n Unfonts.default.vite({\\n custom: {\\n families: [\\n {\\n name: \\"Oswald\\",\\n local: \\"Oswald\\",\\n src: \\"./public/fonts/Oswald-VariableFont_wght.ttf\\",\\n },\\n ],\\n },\\n }),\\n ],\\n};\\n\\n
An alternative way to add local fonts with Tailwind CSS is through the @font-face
directive in CSS files.
We need to add the font files to the global.css
file:
@tailwind base;\\n@tailwind components;\\n@tailwind utilities;\\n@import url(\\"https://fonts.googleapis.com/css2?family=Poppins&display=swap\\");\\n@font-face {\\n font-family: \\"Oswald\\";\\n src: url(\\"../public/fonts/Oswald-VariableFont_wght.ttf\\");\\n}\\n\\n
Then, add the font family to the tailwind.config.js
file:
const { fontFamily } = require(\\"tailwindcss/defaultTheme\\");\\n// tailwind.config.js\\nmodule.exports = {\\n ...\\n theme: {\\n fontFamily: {\\n poppins: [\\"Poppins\\", ...fontFamily.sans],\\n oswald: [\\"Oswald\\", ...fontFamily.sans],\\n },\\n },\\n plugins: [],\\n};\\n\\n
Now, let’s use the font in our project. Open up the index.js file and add the font-oswald
class to the h1
element:
// index.js\\n<h1 className=\\"font-oswald text-primary-800 mb-4 text-4xl font-medium\\">\\n Microdosing synth tattooed vexillologist\\n</h1>\\n\\n
Now, let’s see what we have:
\\nAnd that’s it! We are now using a locally installed font in our project.
\\nWhile Tailwind offers many options by default, it also enables you to extend the default configuration by adding your own classes or changing the properties of the default configuration. To do this, we use the tailwind.config.js
file.
We can extend the color and font size configurations by updating the tailwind.config.js
file, like so:
const { fontFamily } = require(\\"tailwindcss/defaultTheme\\");\\n// tailwind.config.js\\nmodule.exports = {\\n content: [\\n \\"./pages/**/*.{js,ts,jsx,tsx}\\",\\n \\"./components/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n theme: {\\n extend: {\\n colors: {\\n primary: {\\n 500: \\"#FF6363;\\",\\n 800: \\"#FF1313;\\",\\n },\\n },\\n },\\n fontSize: {\\n title: `2.6rem;`,\\n paragraph: `1.2rem;`,\\n },\\n fontFamily: {\\n poppins: [\\"Poppins\\", ...fontFamily.sans],\\n oswald: [\\"Oswald\\", ...fontFamily.sans],\\n },\\n },\\n plugins: [],\\n};\\n\\n
In the code above, we added a fontSize
property to the theme object. This fontSize
property contains our custom font sizes: title
and paragraph
. We added a new colors property that contains a primary color with two shades: 500
and 800
.
We can apply these classes to style our component like so:
export default function Home() {\n return (\n <section className=\"font-poppins text-gray-600\">\n <div className=\"container flex flex-col items-center justify-center px-5 py-24 mx-auto\">\n <div className=\"lg:w-2/3 w-full text-center\">\n <h1 className=\"font-oswald text-primary-800 mb-4 text-4xl font-medium\">\n Microdosing synth tattooed vexillologist\n </h1>\n <p className=\"text-primary-500 mb-8 leading-relaxed\">\n Meggings kinfolk echo park stumptown DIY, kale chips beard jianbing\n tousled. Chambray dreamcatcher trust fund, kitsch vice godard\n disrupt ramps hexagon mustache umami snackwave tilde chillwave ugh.\n Pour-over meditation PBR&B pickled ennui celiac mlkshk freegan\n photo booth af fingerstache pitchfork.\n </p>\n <p className=\"text-sm sm:text-base md:text-lg lg:text-xl xl:text-2xl\">\n Dynamic Font Sizing\n </p>\n </div>\n </div>\n </section>\n );\n}\n\n
Now, if you check your browser, you will see that our project has a new look with the styles applied:
\\nPerformance is of the utmost importance. It ensures a great user experience, and we all generally like it when websites are fast. As a result, you might not want to ship any assets that you’re not using in production.
\\nWhen that is the case and you want to get rid of any default configs in your project before shipping, all you have to do is update your Tailwind config by removing the configs you are not using:
\\nconst { fontFamily } = require(\\"tailwindcss/defaultTheme\\");\\n// tailwind.config.js\\nmodule.exports = {\\n content: [\\n \\"./pages/**/*.{js,ts,jsx,tsx}\\",\\n \\"./components/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n theme: {\\n fontFamily: {\\n poppins: [\\"Poppins\\", ...fontFamily.sans],\\n oswald: [\\"Oswald\\", ...fontFamily.sans],\\n },\\n },\\n plugins: [],\\n};\\n\\n
The difference is that we omitted the extend: {}
object within the theme: {}
object and directly specified values for fontFamily
. This will ultimately override all default fonts and use only the ones we’ve specified.
Starting from the v3 release, the JIT (Just-In-Time) engine for Tailwind CSS automatically removes unused CSS within production builds, without needing a purge configuration in the tailwind.config.js
file.
If you have a legacy project using older versions of Tailwind CSS, the deprecated purge feature allows you to discard all unused CSS in production builds. You enable it by adding the purge key to the config of your old project, like so:
\\nconst { fontFamily } = require(\\"tailwindcss/defaultTheme\\");\\n// tailwind.config.js\\nmodule.exports = {\\n content: [\\n \\"./pages/**/*.{js,ts,jsx,tsx}\\",\\n \\"./components/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n purge: {\\n enabled: true,\\n content: [\\n \\"./pages/**/*.{js,ts,jsx,tsx}\\",\\n \\"./components/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n },\\n theme: {\\n fontFamily: {\\n poppins: [\\"Poppins\\", ...fontFamily.sans],\\n oswald: [\\"Oswald\\", ...fontFamily.sans],\\n },\\n },\\n plugins: [],\\n};\\n\\n
Starting with Tailwind v4, the tailwind.config.js
file is no longer automatically generated during installation. While Tailwind hasn’t entirely dropped support for the config file, you can still create and use it manually. They’ve made it clear in the documentation that going forward, the preferred approach is the CSS-first configuration method.
This method essentially lets you define all of Tailwind CSS’s configuration directly within your project’s global CSS file. So, in the case of custom fonts, rather than configuring the fontFamily
key inside the theme section of tailwind.config.js
like this:
const { fontFamily } = require(\\"tailwindcss/defaultTheme\\");\\n// tailwind.config.js\\nmodule.exports = {\\n ...\\n theme: {\\n fontFamily: {\\n poppins: [\\"Poppins\\", ...fontFamily.sans],\\n oswald: [\\"Oswald\\", ...fontFamily.sans],\\n },\\n },\\n};\\n\\n
Instead, you’ll use the @theme
directive in your global CSS file and define your custom font using the --font-*
theme variables:
@theme {\\n --font-oswald: \\"Oswald\\", \\"sans-serif\\"; \\n}\\n\\n
Once set, you can apply the custom font just like any other Tailwind utility class, by using the font’s variable name. In this case, font-oswald
:
<div class=\\"font-oswald\\">\\n <!-- ... --\x3e\\n</div>\\n\\n
The process is similar when using Google Fonts, but there’s one important detail to keep in mind: the @import
rule for the font must appear at the very top of your CSS file to ensure it’s properly loaded by the browser:
@import url(\'https://fonts.googleapis.com/css2?family=Poppins:wght@400;500&display=swap\');\n\n@theme {\n --font-poppins: \"Poppins\", sans-serif; \n}\n\n
Responsive, or dynamic, font sizing is a technique that allows you to change the size of your UI elements based on the size of the screen. This is useful when you want to ensure your UI elements are readable on all devices, regardless of their screen size.
\\nTailwind provides responsive text classes that allow you to set different font sizes based on the screen size. To implement dynamic font sizing in Tailwind CSS, you can leverage Tailwind’s utility classes along with responsive design principles. Here’s how you can achieve dynamic font sizing:
\\n<p class=\\"text-sm sm:text-base md:text-lg lg:text-xl xl:text-2xl\\">\\n Dynamic Font Sizing Using Responsive Text Classes\\n</p>\\n\\n
In the example above, we are using:

- The `text-sm` class — To set the font size to 14px on the smallest screens
- The `sm:text-base` class — To set the font size to 16px on small screens
- The `md:text-lg` class — To set the font size to 18px on medium screens
- The `lg:text-xl` class — To set the font size to 20px on large screens
- The `xl:text-2xl` class — To set the font size to 24px on extra-large screens

An alternative method for implementing dynamic font sizing with Tailwind fonts is using clamp arbitrary values. While a dedicated clamp utility class is still being discussed, you can use a clamp arbitrary value to make your text responsive without having to add multiple breakpoint classes:
\\n<p class=\\"text-[clamp(1rem, 2.5vw, 2rem)]\\"> Dynamic Font Sizing with Clamp </p>\\n\\n
Adding custom fonts to Tailwind isn’t always smooth sailing. This section addresses common questions like “Why isn’t my custom font showing up?” and outlines fixes for common issues.
When your custom font doesn't display as expected, the browser is likely falling back to default system fonts. This is deliberate fallback behavior: the font stacks Tailwind generates always end with a generic family, so if a custom font fails to load, the browser uses the next available font rather than rendering nothing.
\\nMake sure your custom font is being properly imported. If you’re using Google Fonts, confirm that the @import
statement or <link>
tag is correctly included, either in your global CSS or your tailwind.config.js
(if you’re managing styles that way). If you’re using @font-face
with local fonts, double-check the file paths and the font-family
name for accuracy.
Verify that the font is included in the production build. During production builds, asset files can be moved or excluded. Make sure your font files are in web-friendly formats like .woff
or .woff2
, and place them in a publicly accessible location, typically the /public
folder or a designated assets/
directory, depending on your framework.
When using @font-face
, reference your fonts with relative paths that match their final build location. For example:
@font-face {\\n font-family: \'MyFont\';\\n src: url(\'/fonts/my-font.woff2\') format(\'woff2\');\\n font-weight: normal;\\n font-style: normal;\\n}\\n\\n
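Optionally (an extra step, not something this setup strictly requires), you can also preload a self-hosted font so the browser fetches it early; the `href` here is illustrative and must match the file's final build location:

```html
<link rel="preload" href="/fonts/my-font.woff2" as="font" type="font/woff2" crossorigin />
```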
Before we wrap up, let’s answer some common questions developers often ask about using custom fonts and font families in Tailwind CSS:
\\nTailwind provides three default font family utilities: font-sans
, font-serif
, and font-mono
. These classes use a stack of system fonts for their respective typefaces, sans-serif
, serif
, and monospace
.
Here’s an example of their respective font stacks:
- `sans-serif` — system-ui, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol, Noto Color Emoji
- `serif` — Georgia, Cambria, Times New Roman, Times, serif
- `monospace` — SFMono-Regular, Menlo, Monaco, Consolas, Liberation Mono, Courier New, monospace

So, when using any of these classes, you're actually relying on the full font stack listed above.
\\nYou can change the font in Tailwind CSS using either of the following methods:
\\ntheme.fontFamily
section in your tailwind.config.js
file@font-face
rule, assigning it a name, and then referencing that name in your Tailwind configuration, exactly as we’ve demonstrated throughout this articleYes. You can use Google Fonts with Tailwind. In fact, this article walks you through how to import and integrate Google Fonts into your Tailwind project.
\\nIn this article, we explored how to use Google Fonts in CSS and integrate them into your Tailwind CSS projects. From performing a Google font import in CSS to configuring local fonts, we’ve covered all the essentials for styling your frontend effectively. Use these methods to improve your font management and your Tailwind designs.
\\nWhat are your go-to fonts for web projects? Share them with us in the comments!
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nastro:db
)\\n registerUser
server actions\\n getProducts
server action\\n ProductList
component\\n ProductCard
component\\n getProductBySlug
server action\\n updateProduct
server action\\n Authentication and authorization concepts like JWTs, RBAC, and client-side components are common across frameworks like Vue, React, and Svelte. But Astro’s islands architecture presents unique challenges for authentication because it needs to be carefully handled between static content and interactive client-side components. Unlike Next.js or Nuxt, Astro doesn’t inherently handle API routes, requiring a different approach to authentication. Astro requires manual handling of protected routes using middleware.
\\nThis tutorial will specifically show how to integrate authentication in a partially static, partially dynamic framework, leveraging Astro’s server-side rendering (SSR) support and protecting static and dynamic routes in Astro. We’ll also explore using Astro’s new SSR capabilities to store sessions, refresh tokens, and manage user state efficiently.
\\nWe will build a vehicle rental app with JWT authentication and Role-Based Access Control with Astro. This is what the final application will look like:
\\nThis app’s features include user authentication (registration, login, logout), product/vehicle listings, detailed product views, and an administrative dashboard for editing product details.
\\nHere is the GitHub repository for the final build.
\\nRun the following command in your terminal to scaffold an Astro project:
\\nnpm create astro@latest\\n\\n
Then, choose the basic template.
\\nNext, update the project’s package.json
file with the following dependencies:
{\\n \\"dependencies\\": {\\n \\"@astrojs/db\\": \\"^0.14.11\\",\\n \\"@astrojs/netlify\\": \\"^6.2.6\\",\\n \\"@astrojs/react\\": \\"^4.2.4\\",\\n \\"@auth/core\\": \\"^0.37.4\\",\\n \\"@tailwindcss/vite\\": \\"^4.1.4\\",\\n \\"@types/bcryptjs\\": \\"^2.4.6\\",\\n \\"@types/react\\": \\"^19.1.2\\",\\n \\"@types/react-dom\\": \\"^19.1.2\\",\\n \\"astro\\": \\"^5.6.2\\",\\n \\"auth-astro\\": \\"^4.2.0\\",\\n \\"bcryptjs\\": \\"^3.0.2\\",\\n \\"react\\": \\"^19.1.0\\",\\n \\"react-dom\\": \\"^19.1.0\\",\\n \\"tailwindcss\\": \\"^4.1.4\\",\\n \\"uuid\\": \\"^11.1.0\\"\\n }\\n}\\n\\n
- `tailwindcss` / `@tailwindcss/vite`: For utility-first CSS styling
- `@astrojs/db`: A local-first ORM/SQL database layer for database interactions, defining schemas, and seeding data
- `auth-astro`: For handling user authentication, integrating between Astro and `@auth/core`
- `@astrojs/react`: To enable the use of React components within the Astro application
- `@astrojs/netlify`: Integration for deploying Astro projects on Netlify
- `bcryptjs`: For hashing and comparing passwords securely
- `js-cookie`: A utility for managing cookies in the browser (storing tokens, sessions)
- `uuid`: Used to generate universally unique IDs

During development, Astro uses your database configuration to automatically generate local TypeScript types and autocompletion based on your defined schemas each time the dev server is started. We'll configure and use Astro DB (`astro:db`) for the app database. Let's begin by defining the database tables and their relationships.
\\nCreate a db/config.ts
file at the root of your project where you will define a schema for the database tables and their relationships. Then add the following:
import { column, defineDb, defineTable } from \\"astro:db\\";\\nconst User = defineTable({\\n columns: {\\n id: column.text({ primaryKey: true, unique: true }),\\n name: column.text(),\\n email: column.text(),\\n password: column.text(),\\n createdAt: column.date({ default: new Date() }),\\n role: column.text({ references: () => Role.columns.id }),\\n },\\n});\\nconst Role = defineTable({\\n columns: {\\n id: column.text({ primaryKey: true }),\\n name: column.text(),\\n },\\n});\\nconst Product = defineTable({\\n columns: {\\n id: column.text({ primaryKey: true }),\\n description: column.text(),\\n price: column.number(),\\n brand: column.text(),\\n slug: column.text({ unique: true }),\\n stock: column.number(),\\n tags: column.text(),\\n name: column.text(),\\n type: column.text(),\\n user: column.text({ references: () => User.columns.id }),\\n },\\n});\\nconst ProductImage = defineTable({\\n columns: {\\n id: column.text({ primaryKey: true }),\\n productId: column.text({ references: () => Product.columns.id }),\\n image: column.text(),\\n },\\n});\\nexport default defineDb({\\n tables: {\\n User,\\n Role,\\n Product,\\n ProductImage,\\n },\\n});\\n\\n
This defines a schema for our database tables and relationships using Astro DB. It’s similar to how ORMs like Prisma or Sequelize work. Each User
has one Role
, each Product
belongs to one User
, and each ProductImage
belongs to one Product. Also, a Product can have multiple associated ProductImages
, forming a one-to-many relationship.
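As a quick sketch of what those relationships buy us later, Astro DB's drizzle-style query builder (re-exported from `astro:db`) can join the tables; the variable name here is illustrative:

```ts
import { db, Product, ProductImage, eq } from "astro:db";

// One row per image, each joined with its parent product
const productsWithImages = await db
  .select()
  .from(Product)
  .innerJoin(ProductImage, eq(ProductImage.productId, Product.id));
```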
Next, update astro.config.mjs
as follows:
import { defineConfig } from \'astro/config\';\\nimport db from \'@astrojs/db\';\\n\\nexport default defineConfig({\\n integrations: [db()],\\n});\\n\\n
To seed the database with initial data, create a seed-data.ts
file in the db
folder with the following:
interface SeedVehicle {\\n description: string;\\n images: string[];\\n stock: number;\\n price: number;\\n brand: string;\\n slug: string;\\n name: string;\\n type: VehicleTypes;\\n tags: string[];\\n}\\ntype VehicleTypes = \'COUPE\' | \'SEDAN\' | \'SPORTS CAR\' | \'CONVERTIBLE\' | \'TRUCK\' | \'STATION WAGON\';\\nexport const seedVehicles: SeedVehicle[] = [\\n {\\n description:\\n \'Sleek burgundy luxury car with multi-spoke rims in a minimalist beige and brown indoor setting, exuding elegance and modern design.\',\\n images: [\'burgundy_1.jpeg\', \'burgundy_2.jpeg\'],\\n stock: 7,\\n price: 750,\\n brand: \'Tesla\',\\n slug: \'luxury_burgundy_car\',\\n name: \'Luxury Burgundy Car\',\\n type: \'COUPE\',\\n tags: [\'sleek vehicle\', \'luxury car\', \'modern design\']\\n },\\n {\\n description:\\n \'Sleek black SUV with futuristic design parked in front of a modern building with warm lighting and glass panels.\',\\n images: [\'luxury_suv_1.jpeg\', \'luxury_suv_2.jpeg\'],\\n stock: 3,\\n price: 900,\\n brand: \'Tesla\',\\n slug: \'range_rover_luxury_suv\',\\n name: \'Range Rover Luxury SUV\',\\n type: \'COUPE\',\\n tags: [\'SUV\', \'luxury car\', \'modern design\']\\n },\\n {\\n description:\\n \'Front view of a vibrant orange sports car with sharp LED headlights, bold grille, and dramatic lighting in a dark setting.\',\\n images: [\'nissan_sport_1.jpeg\', \'nissan_sport_2.jpeg\'],\\n stock: 6,\\n price: 1200,\\n brand: \'Nissan\',\\n slug: \'nissan_sport_car\',\\n name: \'Nissan Sport Car\',\\n type: \'SPORTS CAR\',\\n tags: [\'aerodynamics\', \'sports\', \'speed\']\\n },\\n]\\n\\n
This interface describes the shape of a single vehicle object used for seeding. The VehicleTypes
union type defines a limited set of allowed vehicle types.
Download the image files from the final project’s GitHub repo.
\\nNext, create a seed.ts
file in the db
folder with the following:
import { db, Role, User, Product, ProductImage } from \"astro:db\";\nimport { v4 as UUID } from \"uuid\";\nimport bcrypt from \"bcryptjs\";\nimport { seedVehicles } from \"./seed-data\";\n// https://astro.build/db/seed\nexport default async function seed() {\n const roles = [\n { id: \"admin\", name: \"Administrator\" },\n { id: \"user\", name: \"User\" },\n ];\n const paulPlay = {\n id: \"001-002-PAUL\",\n name: \"Paul Play\",\n email: \"[email protected]\",\n password: bcrypt.hashSync(\"password\"),\n role: \"admin\",\n };\n const peterParker = {\n id: \"001-002-PETER\",\n name: \"Peter Parker\",\n email: \"[email protected]\",\n password: bcrypt.hashSync(\"password\"),\n role: \"user\",\n };\n await db.insert(Role).values(roles);\n await db.insert(User).values([paulPlay, peterParker]);\n const queries: any[] = [];\n seedVehicles.forEach((p) => {\n const product = {\n id: UUID(),\n description: p.description,\n price: p.price,\n brand: p.brand,\n slug: p.slug,\n stock: p.stock,\n tags: p.tags.join(\",\"),\n name: p.name,\n type: p.type,\n user: paulPlay.id,\n };\n queries.push(db.insert(Product).values(product));\n p.images.forEach((img) => {\n const image = {\n id: UUID(),\n image: img,\n productId: product.id,\n };\n queries.push(db.insert(ProductImage).values(image));\n });\n });\n await db.batch(queries);\n}\n\n
This populates the database with the initial data. It adds user and admin roles to the Role
table, adds sample users to the User
table, uses bcryptjs
to hash initial user passwords, and uuid
to generate unique IDs for products and images. It iterates through seedVehicles
from db/seed-data.ts
to create a Product
and an associated ProductImage
and uses db.batch()
for efficient insertion of multiple product/image records.
To enable SSR in the Astro project, add the following to astro.config.mjs
:
import { defineConfig } from \'astro/config\';\\nimport netlify from \\"@astrojs/netlify\\";\\nexport default defineConfig({\\n output: \\"server\\", \\n adapter: netlify(),\\n});\\n\\n
The netlify
adapter allows the server to render any page on demand when a route is visited.
To use React and Tailwind in the Astro project, add the following to astro.config.mjs
:
import { defineConfig } from \'astro/config\';\\nimport react from \\"@astrojs/react\\";\\nimport tailwindcss from \\"@tailwindcss/vite\\";\\nexport default defineConfig({\\n integrations: [react()],\\n output: \\"server\\", \\n vite: {\\n plugins: [tailwindcss()]\\n }\\n});\\n\\n
Next, update the tsconfig.json
file as follows:
{\\n \\"extends\\": \\"astro/tsconfigs/strict\\",\\n \\"compilerOptions\\": {\\n \\"baseUrl\\": \\".\\",\\n \\"paths\\": {\\n \\"@/*\\": [\\n \\"src/*\\"\\n ]\\n },\\n \\"jsx\\": \\"react-jsx\\",\\n \\"jsxImportSource\\": \\"react\\"\\n }\\n}\\n\\n
This config enables strict TypeScript settings with React JSX support and a cleaner import alias for the src
directory.
Next, create `styles/global.css` in the `assets` folder and add the following:
@import \\"tailwindcss\\";\\n\\n
Astro supports creating components with Svelte, Vue, React, SolidJS, and Preact. It’s also framework agnostic, meaning developers can choose and combine different frameworks and libraries for their projects.
\\nCreate shared/Navbar.astro
in the components
folder and add the following:
---\\nconst { isLoggedIn, isAdmin, user } = Astro.locals;\\n---\\n<!-- component --\x3e\\n<nav\\n class=\\"flex justify-between px-20 py-10 items-center fixed top-0 w-full z-10 h-20\\"\\n style=\\"background-color: #000000;\\"\\n>\\n <h1 class=\\"text-xl text-white font-bold\\">\\n <a href=\\"/\\">AutoRentals</a>\\n </h1>\\n <div class=\\"flex items-center\\">\\n <ul class=\\"flex items-center space-x-6\\">\\n <li class=\\"font-semibold text-white\\">\\n <p>{user && user.email}</p>\\n </li>\\n {\\n isAdmin && (\\n <li class=\\"font-semibold text-white\\">\\n <a href=\\"/admin/dashboard\\">Dashboard</a>\\n </li>\\n )\\n }\\n {\\n !isLoggedIn ? (\\n <li class=\\"font-semibold text-white\\">\\n <a href=\\"/login\\">Login</a>\\n </li>\\n ) : (\\n <li id=\\"logout\\" class=\\"font-semibold cursor-pointer text-white\\">\\n <a>Log out</a>\\n </li>\\n )\\n }\\n </ul>\\n </div>\\n</nav>\\n<script>\\n const { signOut } = await import(\\"auth-astro/client\\");\\n const logoutElem = document.querySelector(\\"#logout\\") as HTMLLIElement;\\n logoutElem?.addEventListener(\\"click\\", async () => {\\n await signOut();\\n window.location.href = \\"/\\";\\n });\\n</script>\\n\\n
The Navbar
component displays the logged-in user’s email, shows an admin dashboard link if the user has admin privileges, and toggles between “Login” and “Log out” links depending on whether the user is authenticated. The logout button triggers a signOut()
function from auth-astro/client
and redirects the user to the homepage.
Layouts are Astro components that provide a reusable UI structure for sharing UI elements like navigation bars, menus, and footers across multiple pages.
\\nCreate MainLayout.astro
in the layouts
folder and add the following:
---\\nimport Navbar from \\"@/components/shared/Navbar.astro\\";\\nimport \\"@/assets/styles/global.css\\";\\nimport { ClientRouter } from \\"astro:transitions\\";\\ninterface Props {\\n title?: string;\\n description?: string;\\n image?: string;\\n}\\nconst {\\n title = \\"AutoRentals\\",\\n description = \\"One stop shop for all your vehicle rentals\\",\\n image = \\"/vehicles/images/no-image.png\\",\\n} = Astro.props;\\n---\\n<html lang=\\"en\\">\\n <head>\\n <meta charset=\\"utf-8\\" />\\n <link rel=\\"icon\\" type=\\"image/svg+xml\\" href=\\"/favicon.svg\\" />\\n <meta name=\\"viewport\\" content=\\"width=device-width\\" />\\n <meta name=\\"generator\\" content={Astro.generator} />\\n <title>{title}</title>\\n <!-- Meta tags --\x3e\\n <meta name=\\"title\\" content={title} />\\n <meta name=\\"description\\" content={description} />\\n <!-- Open Graph / Facebook --\x3e\\n <meta property=\\"og:title\\" content={title} />\\n <meta property=\\"og:url\\" content={Astro.url} />\\n <meta property=\\"og:description\\" content={description} />\\n <meta property=\\"og:type\\" content=\\"website\\" />\\n <meta property=\\"og:image\\" content={image} />\\n <!-- Twitter --\x3e\\n <meta property=\\"twitter:card\\" content=\\"summary_large_image\\" />\\n <meta property=\\"twitter:url\\" content={Astro.url} />\\n <meta property=\\"twitter:title\\" content={title} />\\n <meta property=\\"twitter:description\\" content={description} />\\n <meta property=\\"twitter:image\\" content={image} />\\n <ClientRouter />\\n </head>\\n <body>\\n <Navbar />\\n <main class=\\"container m-auto max-w-5xl px-5 pt-24 pb-10\\">\\n <slot />\\n </main>\\n </body>\\n</html>\\n\\n
The MainLayout
component accepts optional title
, description
, and image
props for setting dynamic SEO and social media meta tags, providing better discoverability and sharing. The <ClientRouter />
enables smooth page transitions with astro:transitions
.
Next, create AuthLayout.astro
in the layouts
folder and add the following:
---\\n---\\n<html lang=\\"en\\">\\n <head>\\n <meta charset=\\"utf-8\\" />\\n <link rel=\\"icon\\" type=\\"image/svg+xml\\" href=\\"/favicon.svg\\" />\\n <meta name=\\"viewport\\" content=\\"width=device-width\\" />\\n <meta name=\\"generator\\" content={Astro.generator} />\\n <title>Auth</title>\\n </head>\\n <body>\\n <link\\n href=\\"https://unpkg.com/tailwindcss@^2/dist/tailwind.min.css\\"\\n rel=\\"stylesheet\\"\\n />\\n <div\\n class=\\" relative\\"\\n >\\n <div\\n class=\\"absolute bg-gradient-to-b from-black to-black opacity-75 inset-0 z-0\\"\\n >\\n </div>\\n <div class=\\"min-h-screen sm:flex sm:flex-row mx-0 justify-center\\">\\n <slot />\\n </div>\\n </div>\\n </body>\\n</html>\\n\\n
The AuthLayout
component wraps the page content in a slot
, which will be shared between the login and registration pages.
Create an `interface/product-with-images.interface.ts` file in the `src` folder and add the following:
export interface ProductWithImages {\\n id: string;\\n description: string;\\n images: string;\\n price: number;\\n brand: string;\\n slug: string;\\n stock: number;\\n tags: string;\\n name: string;\\n type: string;\\n user: string;\\n}\\n\\n
To display prices with corresponding currencies, we need a currency formatting utility.
\\nCreate a utils/formatter.ts
file in the src
folder and add the following:
export class Formatter {\\n static currency(value: number, decimals = 2): string {\\n return new Intl.NumberFormat(\\"en-US\\", {\\n style: \\"currency\\",\\n currency: \\"USD\\",\\n maximumFractionDigits: decimals,\\n }).format(value);\\n }\\n}\\n\\n
The Formatter
class formats a number into a U.S. dollar currency string using the built-in Intl.NumberFormat
API.
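For example:

```ts
Formatter.currency(750);     // "$750.00"
Formatter.currency(1200, 0); // "$1,200"
```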
Astro v4.15 introduced actions for seamless communication between your client and server code. Actions automatically handle JSON parsing and validate form data with Zod. They also let you define server functions for data fetching and custom logic, and standardize backend errors with the `ActionError` object, reducing the amount of boilerplate compared to using an API endpoint.
Astro actions are defined as follows:
\\nimport { defineAction } from \'astro:actions\';\\nimport { z } from \'astro:schema\';\\n\\nexport const myAction = defineAction({...})\\n\\n
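To make the skeleton concrete, here's a minimal hypothetical action (the name and fields are illustrative, not part of this project) showing Zod input validation, a typed handler, and a standardized error:

```ts
import { defineAction, ActionError } from "astro:actions";
import { z } from "astro:schema";

export const greet = defineAction({
  // Astro validates the incoming input against this schema before the handler runs
  input: z.object({ name: z.string() }),
  handler: async ({ name }) => {
    if (name.trim().length === 0) {
      // Standardized backend error with a predefined code
      throw new ActionError({ code: "BAD_REQUEST", message: "Name is required" });
    }
    return { message: `Hello, ${name}!` };
  },
});
```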
To make your actions accessible across the project, create an index.ts
file in the actions
folder and export a server containing all your actions:
import { myFirstAction, mySecondAction } from \\"./my-action\\";\\n\\nexport const server = {\\n myFirstAction, \\n mySecondAction\\n};\\n\\n
Now, your actions are available as functions in the astro:actions
module. To access them, import actions
from astro:actions
and call them on the client-side using a <script>
tag in an Astro component, or within a UI framework component, or a form POST request:
src/pages/index.astro\n---\n---\n\n<script>\nimport { actions } from \'astro:actions\';\n\n(async () => {\n const { data, error } = await actions.myFirstAction();\n})();\n</script>\n\n
To access it on the server, wrap the action with Astro.callAction
as follows:
import { actions } from \'astro:actions\';\\nconst { data, error } = await Astro.callAction(actions.myFirstAction,{});\\n\\n
Before we dive into the implementation of authentication in Astro, let’s review the project’s authentication flow. As you can see in the diagram above, we have login and register pages for entering user credentials.
When users navigate to the register page and submit their credentials, the server receives them, hashes the password for security, and stores the credentials in the database. The server then sends a response instructing the browser to set a cookie, which is used to track the active user and verify a visitor's identity. When users navigate to the login page and submit their credentials, the server validates them by comparing the submitted email and password against the records stored in the database.
\\nThis section covers building a user authentication system with pages and server actions for user registration and login, hashing user passwords securely using bcrypt.
\\nBy default, Auth.js doesn’t include custom properties like role
in its types. So we’ll augment Auth.js properties so TypeScript recognizes those additions.
Create auth.d.ts
in the project’s root folder and add the following:
import { DefaultSession, DefaultUser } from \\"@auth/core/types\\";\\ndeclare module \\"@auth/core/types\\" {\\n interface User extends DefaultUser {\\n role?: string;\\n }\\n interface Session extends DefaultSession {\\n user: User;\\n }\\n}\\n\\n
This gives you type safety and autocompletion when accessing session.user.role
in your project.
Next, create env.d.ts
in the project’s src
folder and add the following:
/// <reference path=\\"../.astro/db-types.d.ts\\" />\\n/// <reference path=\\"../.astro/actions.d.ts\\" />\\n/// <reference types=\\"astro/client\\" />\\ninterface User {\\n email: string;\\n name: string;\\n}\\ndeclare namespace App {\\n interface Locals {\\n isLoggedIn: boolean;\\n isAdmin: boolean;\\n user: User | null;\\n }\\n}\\n\\n
This adds type safety to your server-side logic in Astro so that TypeScript knows what to expect when accessing locals
in Astro’s server-side context.
Next, set up user authentication using the auth-astro
package, with a custom credentials-based login system (email and password).
Create auth.config.mts
in the project’s root folder and add the following:
import { defineConfig } from \\"auth-astro\\";\\nimport Credentials from \\"@auth/core/providers/credentials\\";\\nimport { db, User, eq } from \\"astro:db\\";\\nimport bcrypt from \\"bcryptjs\\";\\n\\nexport default defineConfig({\\n providers: [\\n Credentials({\\n credentials: {\\n email: { label: \\"Mail\\", type: \\"email\\" },\\n password: { label: \\"Password\\", type: \\"password\\" },\\n },\\n authorize: async ({ email, password }) => {\\n const [user] = await db\\n .select()\\n .from(User)\\n .where(eq(User.email, `${email}`));\\n if (!user) throw new Error(\\"User not found\\");\\n if (!bcrypt.compareSync(password as string, user.password))\\n throw new Error(\\"Invalid credentials\\");\\n const { password: _, ...rest } = user;\\n return rest;\\n },\\n }),\\n ],\\n});\\n\\n
The credentials
property enables a login form where users can enter their email and password used in the authorize
function to authenticate the user. The authorize
function queries the database for a user matching the provided email. If no user is found, it throws a \\"User not found\\"
error. If a user is found, it verifies the password by comparing the provided one with the hashed password stored in the database using bcrypt.compareSync()
. If the password doesn’t match, it throws an \\"Invalid credentials\\"
error. When the credentials are valid, it returns the user object without the password field for security.
Next, implement callback functions to handle user sessions. Update auth.config.mts
with the following:
...\\nimport type { AdapterUser } from \\"@auth/core/adapters\\";\\n\\nexport default defineConfig({\\n providers: [\\n ...\\n ],\\n callbacks: {\\n jwt: ({ token, user }) => {\\n if (user) {\\n token.user = user;\\n }\\n return token;\\n },\\n session: ({ session, token }) => {\\n session.user = token.user as AdapterUser;\\n return session;\\n },\\n },\\n});\\n\\n
The jwt
callback runs when a JWT (JSON Web Token) is created or updated. If a user
is present (usually right after login), it attaches the user info to the token
.
The session
callback adds the user info from the token
into the session object so it’s accessible throughout your app.
Now, register the auth configurations in astro.config.mjs
as follows:
import { defineConfig } from \'astro/config\';\\nimport auth from \\"auth-astro\\";\\nexport default defineConfig({\\n integrations: [\\n auth({\\n configFile: \'./auth.config.mts\' // Explicitly specify the .mts extension\\n })\\n ],\\n});\\n\\n
Astro middleware runs on every incoming request, so rather than fetching the session separately on every page, we'll define middleware that fetches it once per request and stores the locals values (isLoggedIn, isAdmin, user) used across the app.
Next, create middleware.ts
in the project’s src
folder and add the following:
import { defineMiddleware } from \\"astro:middleware\\";\\nimport { getSession } from \\"auth-astro/server\\";\\nconst notAuthenticatedRoutes = [\\"/login\\", \\"/register\\"];\\n\\nexport const onRequest = defineMiddleware(\\n async ({ url, locals, redirect, request }, next) => {\\n const session = await getSession(request);\\n const isLoggedIn = !!session;\\n const user = session?.user;\\n locals.isLoggedIn = isLoggedIn;\\n locals.user = null;\\n locals.isAdmin = false;\\n if (isLoggedIn && user) {\\n locals.user = {\\n name: user.name!,\\n email: user.email!,\\n };\\n locals.isAdmin = user.role === \\"admin\\";\\n }\\n\\n return next();\\n }\\n);\\n\\n
With this, if the user is authenticated, their name and email are saved in locals.user
and accessed across the app.
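Any page or layout can now read these values directly from Astro.locals. Here's a minimal sketch (a hypothetical nav fragment, not one of the project files):

---
// Reads the values the middleware stored on locals for this request
const { isLoggedIn, isAdmin, user } = Astro.locals;
---
{isLoggedIn ? <p>Welcome back, {user?.name}!</p> : <a href="/login">Login</a>}
{isAdmin && <a href="/admin/dashboard">Dashboard</a>}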
Now, we can create the login and register pages.
\\nCreate login.astro
in the pages
folder and add the following:
---\\nimport AuthLayout from \\"@/layouts/AuthLayout.astro\\";\\n---\\n<AuthLayout>\\n <div class=\\"flex justify-center self-center z-10\\">\\n <div class=\\"p-12 bg-white mx-auto rounded-lg w-[500px]\\">\\n <div class=\\"mb-4\\">\\n <h3 class=\\"font-semibold text-2xl text-gray-800\\">Login</h3>\\n <p class=\\"text-gray-500\\">Sign in to your account.</p>\\n </div>\\n <form class=\\"space-y-5\\">\\n <div class=\\"space-y-2\\">\\n <label class=\\"text-sm font-medium text-gray-700 tracking-wide\\"\\n >Email</label\\n >\\n <input\\n class=\\"w-full text-base px-4 py-2 border border-gray-300 rounded-lg focus:outline-none focus:border-black\\"\\n type=\\"email\\"\\n name=\\"email\\"\\n placeholder=\\"Enter your email\\"\\n />\\n </div>\\n <div class=\\"space-y-2\\">\\n <label class=\\"mb-5 text-sm font-medium text-gray-700 tracking-wide\\">\\n Password\\n </label>\\n <input\\n class=\\"w-full content-center text-base px-4 py-2 border border-gray-300 rounded-lg focus:outline-none focus:border-black\\"\\n type=\\"password\\"\\n name=\\"password\\"\\n placeholder=\\"Enter your password\\"\\n />\\n </div>\\n <div class=\\"flex items-center justify-between\\">\\n <div class=\\"text-sm flex items-center space-x-2\\">\\n <p>Don\'t have an account?</p>\\n <a href=\\"/register\\" class=\\"text-black font-semibold underline\\">\\n register\\n </a>\\n </div>\\n </div>\\n <div>\\n <button\\n type=\\"submit\\"\\n id=\\"btn-submit\\"\\n class=\\"disabled:bg-gray-300 w-full flex justify-center bg-black text-white p-3 rounded-md tracking-wide font-semibold shadow-lg cursor-pointer transition ease-in duration-500\\"\\n >\\n Login\\n </button>\\n </div>\\n </form>\\n </div>\\n </div>\\n</AuthLayout>\\n\\n
This creates the UI for the login page using Astro and Tailwind CSS, wrapped inside the AuthLayout
.
Next, update login.astro
with the following:
...\\n<script>\\n const form = document.querySelector(\\"form\\") as HTMLFormElement;\\n const btnSubmit = document.querySelector(\\"#btn-submit\\") as HTMLButtonElement;\\n const { signIn } = await import(\\"auth-astro/client\\");\\n form.addEventListener(\\"submit\\", async (e) => {\\n e.preventDefault();\\n btnSubmit.setAttribute(\\"disabled\\", \\"disabled\\");\\n const formData = new FormData(form);\\n const resp = await signIn(\\"credentials\\", {\\n email: formData.get(\\"email\\") as string,\\n password: formData.get(\\"password\\") as string,\\n redirect: false,\\n });\\n\\n if (resp) {\\n alert(resp)\\n btnSubmit.removeAttribute(\\"disabled\\");\\n return;\\n }\\n window.location.replace(\\"/\\");\\n });\\n</script>\\n\\n
This script uses the signIn
function from auth-astro/client
for credentials-based login with the \\"credentials\\"
provider. It sends the email and password from the form, disables the submit button during the request, handles errors, and redirects the user on success.
Create register.astro
in the pages
folder and add the following:
---\\nimport AuthLayout from \\"@/layouts/AuthLayout.astro\\";\\n---\\n<AuthLayout>\\n <div class=\\"flex justify-center self-center z-10\\">\\n <div class=\\"p-12 bg-white mx-auto rounded-lg w-[500px]\\">\\n <div class=\\"mb-4\\">\\n <h3 class=\\"font-semibold text-2xl text-gray-800\\">Register</h3>\\n <p class=\\"text-gray-500\\">Create an account.</p>\\n </div>\\n <form class=\\"space-y-5\\">\\n <div class=\\"space-y-2\\">\\n <label class=\\"text-sm font-medium text-gray-700 tracking-wide\\"\\n >Name</label\\n >\\n <input\\n class=\\"w-full text-base px-4 py-2 border border-gray-300 rounded-lg focus:outline-none focus:border-black\\"\\n type=\\"text\\"\\n name=\\"name\\"\\n placeholder=\\"Enter your name\\"\\n />\\n </div>\\n <div class=\\"space-y-2\\">\\n <label class=\\"text-sm font-medium text-gray-700 tracking-wide\\"\\n >Email</label\\n >\\n <input\\n class=\\"w-full text-base px-4 py-2 border border-gray-300 rounded-lg focus:outline-none focus:border-black\\"\\n type=\\"email\\"\\n name=\\"email\\"\\n placeholder=\\"Enter your email\\"\\n />\\n </div>\\n <div class=\\"space-y-2\\">\\n <label class=\\"mb-5 text-sm font-medium text-gray-700 tracking-wide\\">\\n Password\\n </label>\\n <input\\n class=\\"w-full content-center text-base px-4 py-2 border border-gray-300 rounded-lg focus:outline-none focus:border-black\\"\\n type=\\"password\\"\\n name=\\"password\\"\\n placeholder=\\"Enter your password\\"\\n />\\n </div>\\n <div class=\\"flex items-center justify-between\\">\\n <div class=\\"text-sm flex items-center space-x-2\\">\\n <p>Already have an account?</p>\\n <a href=\\"/login\\" class=\\"text-black font-semibold underline\\">\\n Login\\n </a>\\n </div>\\n </div>\\n <div>\\n <button\\n type=\\"submit\\"\\n id=\\"btn-submit\\"\\n class=\\"disabled:bg-gray-300 w-full flex justify-center bg-black text-white p-3 rounded-md tracking-wide font-semibold shadow-lg cursor-pointer transition ease-in duration-500\\"\\n >\\n Register\\n </button>\\n </div>\\n </form>\\n </div>\\n </div>\\n</AuthLayout>\\n\\n
Similarly, this creates the UI for the register page using Astro and Tailwind CSS wrapped inside the AuthLayout
.
Next, update register.astro
with the following:
<script>\\n import { actions } from \\"astro:actions\\";\\n const form = document.querySelector(\\"form\\") as HTMLFormElement;\\n const btnSubmit = document.querySelector(\\"#btn-submit\\") as HTMLButtonElement;\\n form.addEventListener(\\"submit\\", async (e) => {\\n e.preventDefault();\\n btnSubmit.setAttribute(\\"disabled\\", \\"disabled\\");\\n const formData = new FormData(form);\\n const { error } = await actions.registerUser(formData);\\n if (error) {\\n alert(error);\\n btnSubmit.removeAttribute(\\"disabled\\");\\n return;\\n }\\n window.location.replace(\\"/login\\");\\n });\\n</script>\\n\\n
This script handles the registration form submission on the client side using Astro server actions. It disables the button during processing, sends the form data to a secure server-side handler (registerUser
action), handles errors gracefully, and redirects the user on success.
registerUser
server actionsCreate auth/register.action.ts
in the actions folder and add the following:
import { defineAction } from 'astro:actions';
import { z } from "astro:schema";
import { db, User } from 'astro:db';
import bcrypt from "bcryptjs";

export const registerUser = defineAction({
  accept: 'form',
  input: z.object({
    name: z.string().min(2),
    email: z.string().email(),
    password: z.string().min(6),
  }),
  handler: async ({ name, email, password }) => {
    const user = {
      name,
      email,
      password: bcrypt.hashSync(password),
      role: "user",
    };
    // Persist the new user; the original snippet built this object but never saved it
    await db.insert(User).values(user);
    return { ok: true };
  },
});
The registerUser
server action handles user registration by validating the input, hashing the password, and inserting the new user into the database; the client-side script then redirects the user to the login page.
The homepage will require a server action and a component to render product data.
\\ngetProducts
server actionWe need to implement a server action that retrieves a list of products from the database, including their associated images.
\\nCreate products/get-products.action.ts
in the actions
folder and add the following:
import type { ProductWithImages } from \\"@/interfaces\\";\\nimport { defineAction } from \\"astro:actions\\";\\nimport { db, sql } from \\"astro:db\\";\\nexport const getProducts = defineAction({\\n accept: \\"json\\",\\n handler: async () => {\\n const productsQuery = sql`\\n select a.*,\\n ( select GROUP_CONCAT(image,\',\') from \\n ( select * from ProductImage where productId = a.id)\\n ) as images\\n from Product a;\\n `;\\n const { rows } = await db.run(productsQuery);\\n const products = rows.map((product) => {\\n return {\\n ...product,\\n images: product.images ? product.images : \\"no-image.png\\",\\n };\\n }) as unknown as ProductWithImages[];\\n return {\\n products: products,\\n };\\n },\\n});\\n\\n
This action accepts JSON requests and runs a raw SQL query that selects all fields from the Product
table and, for each product, retrieves all associated images from the ProductImage
table where productId = a.id
, combining the image values into a single comma-separated string using GROUP_CONCAT
.
After executing the query and retrieving the results as rows
, it maps through each product, preserving all fields and assigning a fallback value of \\"no-image.png\\"
to the images
field if none exist. Finally, it returns the formatted product list as an object.
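For illustration, a single formatted row might look like this (hypothetical values):

{
  "id": "abc-123",
  "name": "Tesla Model 3",
  "price": 120,
  "images": "model3-front.png,model3-side.png"
}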
Update index.astro
with the following:
---\\nimport { actions } from \\"astro:actions\\";\\nimport MainLayout from \\"@/layouts/MainLayout.astro\\";\\nimport { ProductList } from \\"@/components\\";\\n\\nconst { data, error } = await Astro.callAction(actions.getProducts, {});\\nif (error) {\\n return Astro.redirect(\\"/\\");\\n}\\nconst { products } = data;\\n---\\n<MainLayout>\\n <h1 class=\\"text-3xl text-center my-4\\">Luxury Cars</h1>\\n <ProductList products={products} client:idle />\\n</MainLayout>\\n\\n
This page fetches product data via the getProducts
Astro action, handles errors with a redirect, and displays the results using the ProductList
component, all wrapped in MainLayout
(a shared layout).
ProductList
componentimport type { ProductWithImages } from \\"@/interfaces\\";\\nimport { ProductCard } from \\"./ProductCard\\";\\ninterface Props {\\n products: ProductWithImages[];\\n}\\nexport const ProductList = ({ products }: Props) => {\\n return (\\n <div className=\\"grid grid-cols-1 sm:grid-cols-2 md:grid-cols-3 place-items-center\\">\\n {products.map((product) => (\\n <ProductCard key={product.id} product={product} />\\n ))}\\n </div>\\n );\\n};\\n\\n
This React component receives a list of products (products
) as a prop. For each product in the array, it passes the product data to ProductCard
component as a prop.
ProductCard
componentimport type { ProductWithImages } from \\"@/interfaces\\";\\nimport { Formatter } from \\"@/utils\\";\\nimport { useState } from \\"react\\";\\ninterface Props {\\n product: ProductWithImages;\\n}\\nexport const ProductCard = ({ product }: Props) => {\\n const images = product.images.split(\\",\\").map((img) => {\\n return img.startsWith(\\"http\\")\\n ? img\\n : `${import.meta.env.PUBLIC_URL}/images/vehicles/${img}`;\\n });\\n const [currentImage, setCurrentImage] = useState(images[0]);\\n return (\\n <a href={`/products/${product.slug}`}>\\n <img\\n src={currentImage}\\n alt={product.name}\\n className=\\"h-[350px] w-[300px] object-cover\\"\\n onMouseEnter={() => setCurrentImage(images[1] ?? images[0])}\\n onMouseLeave={() => setCurrentImage(images[0])}\\n />\\n <div className=\\"space-y-1\\">\\n <h4>{product.name}</h4>\\n <p className=\\"font-medium\\">\\n Charges: <span className=\\"font-bold\\">{Formatter.currency(product.price)}</span> per day\\n </p>\\n <p className=\\"font-medium\\">\\n Brand:<span>{product.brand}</span>\\n </p>\\n <div>\\n {(Array.isArray(product.tags)\\n ? product.tags\\n : product.tags.split(\\",\\")\\n ).map((tag) => (\\n <span className=\\"bg-black text-white text-sm py-1.5 px-2 capitalize rounded-md mr-2\\">\\n {tag}\\n </span>\\n ))}\\n </div>\\n </div>\\n </a>\\n );\\n};\\n\\n
This React component, ProductCard
, accepts a product
prop of type ProductWithImages
and handles displaying the product’s image, name, price, brand, and tags.
To create a dynamic route for the product detail page, create the following file:
\\n/pages/products/[...slug].astro\\n\\n
[...slug]
is a dynamic segment that Astro uses to render different content based on the URL.
When a user visits /products/tesla-model-3
, the [...slug].astro
page gets the slug
from the URL, fetches the product details using that slug
, and renders a single product (vehicle) view.
Add the following to [...slug].astro
:
---\\nimport MainLayout from \\"@/layouts/MainLayout.astro\\";\\nimport { Formatter } from \\"@/utils\\";\\nimport { actions } from \\"astro:actions\\";\\n\\nconst { slug } = Astro.params;\\nconst { data, error } = await Astro.callAction(actions.getProductBySlug, slug ?? \\"\\");\\nif (error) return Astro.redirect(\\"/404\\");\\nconst { product, images } = data;\\nconst image = images[0].image.startsWith(\\"http\\")\\n ? images[0].image\\n : `${import.meta.env.PUBLIC_URL}/images/vehicles/${images[0].image}`;\\n---\\n<MainLayout\\n title={product.name}\\n description={product.description}\\n image={image}\\n>\\n <div>\\n <h2 class=\\"text-2xl mt-4 font-bold\\">{product.name}</h2>\\n <img src={image} alt=\\"product-detail image\\" class=\\"w-full h-full object-cover\\"/>\\n <section class=\\"grid grid-cols-1 sm:grid-cols-2 w-full gap-4\\">\\n <div class=\\"space-y-4\\">\\n <div>\\n <p class=\\"mb-1 font-semibold\\">Tags</p>\\n {(Array.isArray(product.tags) \\n ? product.tags \\n : product.tags.split(\\",\\"))\\n .map((tag) => (\\n <span\\n class=\\"bg-black text-white text-sm py-1.5 px-2 capitalize rounded-md mr-2 mb-2\\"\\n\\n >\\n {tag}\\n </span>\\n ))\\n }\\n </div>\\n <p class=\\"font-medium\\">Daily Charges: <span class=\\"font-bold text-2xl\\">{Formatter.currency(product.price)}</span></h2>\\n\\n <p class=\\"text-lg\\">Brand: <span class=\\"bg-black text-sm text-white py-1.5 px-3 rounded-md\\">{product.brand}</span></h3>\\n <div>\\n <h3 class=\\"mt-5\\">Description</h3>\\n <p>{product.description}</p>\\n </div>\\n </div>\\n <div>\\n <h3 class=\\"mt-5\\">Quantity</h3>\\n <div>\\n <button class=\\"btn-quantity\\">-</button>\\n <input type=\\"number\\" min=\\"1\\" value=\\"1\\" />\\n <button class=\\"btn-quantity\\">+</button>\\n </div>\\n <button\\n class=\\"mt-5 bg-black text-white p-3 w-full disabled:bg-gray-500\\"\\n >Proceed to Rent</button\\n >\\n </div>\\n\\n </section>\\n </div>\\n</MainLayout>\\n\\n
This creates the UI for a single product (vehicle) page using Astro and Tailwind CSS, wrapped inside the MainLayout
.
getProductBySlug
server actionNow, we’ll implement a server action that retrieves a single product from the database, including its associated images.
\\nCreate products/get-products-by-slug.action.ts
in the actions folder and add the following:
import { defineAction} from \\"astro:actions\\";\\nimport { z } from \\"astro:schema\\";\\nimport { Product, ProductImage, db, eq } from \\"astro:db\\";\\nexport const getProductBySlug = defineAction({\\n accept: \\"json\\",\\n input: z.string(),\\n handler: async (slug) => {\\n const [product] = await db\\n .select()\\n .from(Product)\\n .where(eq(Product.slug, slug));\\n if (!product) throw new Error(`Product with slug ${slug} not found.`);\\n const images = await db\\n .select()\\n .from(ProductImage)\\n .where(eq(ProductImage.productId, product.id));\\n return {\\n product: product,\\n images: images,\\n };\\n },\\n});\\n\\n
Authorization ensures that users can only access resources or perform actions they are allowed to, based on their authenticated identity or assigned roles. We’ll ensure only authorized (Admin) users can access the dashboard features.
\\nCreate an admin/dashboard.astro
file in the pages
folder and add the following:
---
import { actions } from "astro:actions";
import MainLayout from "@/layouts/MainLayout.astro";
import { Formatter } from "@/utils";

const { data, error } = await Astro.callAction(actions.getProducts, {});
if (error) {
  return Astro.redirect("/");
}
const { products } = data;
---
<MainLayout title="Admin Dashboard">
  <h1 class="font-bold text-2xl">Dashboard</h1>
  <p class="font-semibold text-lg">Product List</p>
  <table class="w-full mt-5">
    <thead>
      <tr>
        <th class="text-left">Image</th>
        <th class="text-left">Title</th>
        <th class="text-left">Daily Charges</th>
        <th class="text-left">Inventory</th>
      </tr>
    </thead>
    <tbody>
      {
        products.map((product) => (
          <tr>
            <td>
              {
                product.images.length > 0 ? (
                  <img
                    src={`/images/vehicles/${product.images.split(',')[0]}`}
                    alt={product.name}
                    class="w-16 h-16 mb-2"
                  />
                ) : (
                  <img src="/images/products/no-image.png" alt="No image" />
                )
              }
            </td>
            <td>
              <a
                class="hover:underline cursor-pointer"
                href={`/admin/products/${product.slug}`}
              >
                {product.name}
              </a>
            </td>
            <td>{Formatter.currency(product.price)}</td>
            <td class="justify-end">{product.stock}</td>
          </tr>
        ))
      }
    </tbody>
  </table>
</MainLayout>
This renders an admin dashboard page that displays a list of products from the server. It retrieves a list of products using the getProducts
action, processes the data, and dynamically displays the products in a table format.
We’ll create a dynamic route for the Update Product page, such that when a user visits /admin/products/tesla-model-3
, the [...slug].astro
page gets the slug
from the URL, fetches the product data using that slug
and renders a prefilled form with vehicle data.
Create products/[...slug].astro
in the admin
folder and add the following:
---\\nimport MainLayout from \\"@/layouts/MainLayout.astro\\";\\nimport { actions } from \\"astro:actions\\";\\n\\nconst { slug } = Astro.params;\\nconst { data, error } = await Astro.callAction(actions.getProductBySlug, slug ?? \\"\\");\\nif (error) {\\n return Astro.redirect(\\"/404\\");\\n}\\nconst { product, images } = data;\\n---\\n<MainLayout title=\\"Product update page\\">\\n <form >\\n <input type=\\"hidden\\" name=\\"id\\" value={product.id} />\\n <div class=\\"flex justify-between items-center\\">\\n <h1 class=\\"font-bold text-2xl\\">{product.name}</h1>\\n <button type=\\"submit\\" class=\\"bg-black mb-5 p-2 rounded text-white\\"\\n >Save Changes</button\\n >\\n </div>\\n <div class=\\"grid grid-cols-1 sm:grid-cols-2 gap-4\\">\\n <div>\\n <div class=\\"mb-4\\">\\n <label for=\\"name\\" class=\\"block\\">Name</label>\\n <input\\n type=\\"text\\"\\n id=\\"name\\"\\n name=\\"name\\"\\n value={product.name}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"slug\\" class=\\"block\\">Slug</label>\\n <input\\n type=\\"text\\"\\n id=\\"slug\\"\\n name=\\"slug\\"\\n value={product.slug}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"description\\" class=\\"block\\">Description</label>\\n <textarea\\n id=\\"description\\"\\n name=\\"description\\"\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n rows=\\"8\\">{product.description}</textarea\\n >\\n </div>\\n </div>\\n <div>\\n <div class=\\"grid grid-cols-1 sm:grid-cols-2 gap-5\\">\\n <div class=\\"mb-4\\">\\n <label for=\\"price\\" class=\\"block\\">Daily Charges</label>\\n <input\\n type=\\"number\\"\\n id=\\"price\\"\\n name=\\"price\\"\\n value={product.price}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"stock\\" class=\\"block\\">Inventory</label>\\n <input\\n type=\\"number\\"\\n id=\\"stock\\"\\n name=\\"stock\\"\\n value={product.stock}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"brand\\" class=\\"block\\">Brand</label>\\n <input\\n type=\\"text\\"\\n id=\\"brand\\"\\n name=\\"brand\\"\\n value={product.brand}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n <div class=\\"mb-4\\">\\n <label for=\\"tags\\" class=\\"block\\"\\n >Tags <small class=\\"text-gray-500\\">(Separate with comas)</small\\n ></label\\n >\\n <input\\n type=\\"text\\"\\n id=\\"tags\\"\\n name=\\"tags\\"\\n value={product.tags}\\n class=\\"w-full p-2 border border-gray-300 rounded\\"\\n />\\n </div>\\n\\n <div class=\\"grid grid-cols-2 gap-4\\">\\n <div class=\\"mb-4\\">\\n <label for=\\"tags\\" class=\\"block\\">Type</label>\\n <select class=\\"w-full p-2 border border-gray-300 rounded\\" name=\\"type\\">\\n <option value=\\"\\">[ Select ]</option>\\n {\\n [\\n \\"COUPE\\",\\n \\"SEDAN\\",\\n \\"SPORTS CAR\\",\\n \\"CONVERTIBLE\\",\\n \\"TRUCK\\",\\n \\"STATION WAGON\\",\\n ].map((type) => (\\n <option\\n value={type}\\n class=\\"capitalize\\"\\n selected={type === product.type}\\n >\\n {type.toUpperCase()}\\n </option>\\n ))\\n }\\n </select>\\n </div>\\n </div>\\n </div>\\n </div>\\n </form>\\n</MainLayout>\\n\\n
This creates the UI for the Product Update Page for an admin to edit product details using Astro and Tailwind CSS, wrapped inside the MainLayout
. It uses the slug
from the URL to fetch a product’s data using the getProductBySlug
action. If no product is found (error
is returned), the user is redirected to a 404 page:
<script>\\n import { actions } from \\"astro:actions\\";\\n import { navigate } from \\"astro:transitions/client\\";\\n document.addEventListener(\\"astro:page-load\\", () => {\\n const form = document.querySelector(\\"form\\") as HTMLFormElement;\\n if (!form) {\\n return;\\n }\\n form.addEventListener(\\"submit\\", async (e) => {\\n e.preventDefault();\\n const formData = new FormData(form);\\n const { data, error } = await actions.updateProduct(formData);\\n if (error) {\\n return alert(error.message);\\n }\\n navigate(`/admin/products/${data.slug}`);\\n });\\n });\\n</script>\\n\\n
This script enables client-side handling of a product update form. It calls the updateProduct
server action through astro:actions
to submit the updated data. If the server returns an error, it displays an alert with the error message. If the update is successful, it uses astro:transitions/client
’s navigate()
function to redirect the user to the updated product’s admin page, all without a full page reload.
updateProduct
server actionWe’ll implement a server action that modifies product data in the database. Create products/update-product.action.ts
in the actions folder and add the following:
import { defineAction } from "astro:actions";
import { z } from "astro:schema";
import { Product, db, eq } from "astro:db";
import { getSession } from "auth-astro/server";
import { v4 as UUID } from "uuid";

export const updateProduct = defineAction({
  accept: "form",
  input: z.object({
    id: z.string().optional(),
    description: z.string(),
    price: z.number(),
    brand: z.string(),
    slug: z.string(),
    stock: z.number(),
    tags: z.string(),
    name: z.string(),
    type: z.string(),
  }),
  handler: async (form, { locals, request }) => {
    const session = await getSession(request);
    const user = session?.user;
    const { isAdmin } = locals;
    // Reject the request unless the caller is an authenticated admin;
    // the original `!user && !isAdmin` check let any logged-in user through
    if (!user || !isAdmin) {
      throw new Error("Unauthorized");
    }
    const { id = UUID(), ...rest } = form;
    rest.slug = rest.slug.toLowerCase().replaceAll(" ", "_").trim();
    const product = {
      id: id,
      user: user.id!,
      ...rest,
    };
    await db.update(Product).set(product).where(eq(Product.id, id));
    return product;
  },
});
The updateProduct
action handles form submissions for updating a product in the database, and its authorization check ensures that only authenticated admin users can perform the update.
Route protection is one of the easiest features to implement in Astro. Because Astro middleware runs on every incoming request, the route protection logic should be in the middleware.
\\nUpdate middleware.ts
with the following:
export const onRequest = defineMiddleware(\\n async ({ url, locals, redirect, request }, next) => {\\n ...\\n if (!locals.isAdmin && url.pathname.startsWith(\\"/admin\\")) {\\n return redirect(\\"/\\");\\n }\\n if (isLoggedIn && notAuthenticatedRoutes.includes(url.pathname)) {\\n return redirect(\\"/\\");\\n }\\n\\n return next();\\n }\\n);\\n\\n
This middleware protects admin-only pages and prevents logged-in users from accessing login/register routes. If the user is not an admin (locals.isAdmin
is false
) and they try to access any route that starts with /admin
, they are redirected to the homepage (\\"/\\"
).
In this tutorial, we explored integrating authentication into a partially static, partially dynamic environment, leveraging Astro's server-side rendering (SSR) support to protect both static and dynamic routes, store sessions, refresh tokens, and manage user state efficiently.
\\nIf you encounter any issues while following this tutorial or need expert help with web/mobile development, don’t hesitate to reach out on LinkedIn. I’d love to connect and am always happy to assist!
Zod has long been a favorite for schema validation in TypeScript, valued for its simplicity and tight integration. With the release of Zod 4, the library takes a major leap forward, delivering significant performance boosts, new developer-friendly features, and improved support for modern web applications, sparking well-deserved excitement in the TypeScript community.
\\nIf you’ve checked out Reddit, YouTube, or any other forums recently, you’ve likely seen the buzz.
\\nWondering what all the fuss is about? In this article, we’ll break down what’s new, why it matters, and how Zod 4 could transform your workflow.
We'll dive into the specific features below, but here are some of the results developers are already seeing:
- @zod/mini keeps validation fast without adding overhead to your frontend
- toJSON() eliminates third-party tools, speeding up workflows with form builders, APIs, and backend systems

Zod 4 introduces deep internal optimizations that make parsing up to 3x faster and more memory-efficient, especially for complex, nested schemas. These improvements provide significant benefits for server-side and large-scale validations.
\\nThe core Zod bundle is approximately 57% smaller in version 4, making it well-suited for front-end projects where performance and load times are critical. Whether you’re building a lightweight app or a complex enterprise dashboard, the leaner build helps keep your application fast and efficient.
\\nPrevious versions could also slow down TypeScript tooling, particularly in large schemas or monorepos. Zod 4 addresses this with a 20× reduction in compiler instantiations, resulting in faster type-checking, smoother IDE performance, and quicker builds.
\\nZod 4 introduces @zod/mini
, a lightweight, function-based alternative to the main Zod library, optimized for minimal bundle size in environments like edge functions, serverless platforms, or performance-sensitive front-end apps. Instead of methods like .min()
or .trim()
, @zod/mini
uses standalone functions passed into .check()
to improve tree-shaking efficiency.
While @zod/mini
has a reduced API compared to Zod, it still supports essential validation features and methods like .parse()
, .safeParse()
, and their async versions. It is fully compatible with Zod schemas, allowing seamless switching from full Zod during development to @zod/mini
in production without sacrificing type safety or developer experience.
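As a minimal sketch of that function-based style (the import path and helper names follow the beta docs and may differ in your installed version):

import * as z from "@zod/mini";

// Standalone checks passed into .check() instead of chained methods,
// so unused validators can be tree-shaken out of the bundle
const Username = z.string().check(z.minLength(5), z.maxLength(20));

console.log(Username.safeParse("logrocket")); // { success: true, data: "logrocket" }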
In previous versions of Zod, developers relied on third-party libraries like zod-to-json-schema to convert Zod schemas into JSON Schema, adding unnecessary complexity to the workflow. Zod 4 introduces built-in support for this conversion, eliminating the need for external tools.
\\nWith this new functionality, developers can convert Zod schemas to JSON Schema in just a few lines of code, enabling seamless integration with tools and systems that rely on JSON Schema.
\\nThe following example demonstrates how to convert a Zod schema to JSON Schema using Zod 4’s built-in .toJSON()
method:
import { z } from \\"zod\\";\\nconst userSchema = z.object({\\n id: z.string().uuid(),\\n name: z.string().min(1),\\n email: z.string().email(),\\n age: z.number().int().positive().optional(),\\n isAdmin: z.boolean().default(false),\\n});\\nconst jsonSchema = userSchema.toJSON();\\nconsole.log(JSON.stringify(jsonSchema, null, 2));\\n\\n
The code above will output the following JSON Schema:
\\n{\\n \\"type\\": \\"object\\",\\n \\"properties\\": {\\n \\"id\\": { \\"type\\": \\"string\\", \\"format\\": \\"uuid\\" },\\n \\"name\\": { \\"type\\": \\"string\\", \\"minLength\\": 1 },\\n \\"email\\": { \\"type\\": \\"string\\", \\"format\\": \\"email\\" },\\n \\"age\\": { \\"type\\": \\"number\\" },\\n \\"isAdmin\\": { \\"type\\": \\"boolean\\", \\"default\\": false }\\n },\\n \\"required\\": [\\"id\\", \\"name\\", \\"email\\"]\\n}\\n\\n
Zod 4 introduces the Global Registry (z.globalRegistry
), a centralized system for managing and reusing schemas and their metadata, such as id, title, description, and examples. You can register schemas using .meta()
or .register()
, and when serialized with .toJSON()
, Zod automatically adds them to a shared $defs
section for seamless $ref-based
referencing.
This makes it easier to maintain consistent validation across large applications and is especially valuable for generating clean, reusable JSON Schema definitions in API specifications. The code below demonstrates how to use z.globalRegistry
to manage Zod schemas:
import { z } from \\"zod\\";\\nconst userSchema = z.object({\\n id: z.string().uuid(),\\n name: z.string(),\\n email: z.string().email(),\\n}).meta({\\n id: \\"User\\",\\n title: \\"User\\",\\n description: \\"A registered user in the system\\",\\n});\\nuserSchema.register(z.globalRegistry);\\nconst postSchema = z.object({\\n title: z.string(),\\n author: userSchema,\\n}).meta({ id: \\"Post\\" });\\nconst definitions: Record<string, any> = {};\\nconst jsonSchema = postSchema.toJSON({ target: \\"jsonSchema7\\", definitions });\\nconsole.log(\\"Post Schema:\\", JSON.stringify(jsonSchema, null, 2));\\nconsole.log(\\"Definitions:\\", JSON.stringify(definitions, null, 2));\\n\\n
Zod 4 introduces a powerful error pretty-printing feature that makes validation messages more readable and actionable. This enhancement significantly improves error reporting and debugging by converting a ZodError
into a well-formatted, multi-line string for clear and user-friendly output.
The example below demonstrates how to use the z.prettifyError
function to effectively handle errors in Zod 4:
import { z } from \\"zod\\";\\nconst userSchema = z.object({\\n username: z.string().min(5),\\n age: z.number().positive(),\\n});\\nconst invalidData = {\\n username: \\"abc\\",\\n age: -5,\\n};\\nconst result = userSchema.safeParse(invalidData);\\nif (!result.success) {\\n console.log(z.prettifyError(result.error));\\n}\\n\\n
The type errors generated from the code above will produce the following readable output:
✖ Too small: expected string to have >=5 characters
  → at username
✖ Too small: expected number to be >0
  → at age
In earlier versions of Zod, file validation required using z.instanceof(File)
combined with custom refinements, which introduced unnecessary boilerplate and limited flexibility. Zod 4 simplifies this process with native support for file validation via the new z.file()
schema.
This purpose-built API streamlines file upload handling in browser-based applications. Developers can easily enforce constraints such as minimum and maximum file size (in bytes) using .min()
and .max()
, and validate specific MIME types using .type()
.
The example below shows how to validate an uploaded file’s type, ensure it falls within an acceptable size range, and confirm it matches the required file extension:
\\nimport { z } from \\"zod\\";\\nconst fileSchema = z\\n .file()\\n .min(10_000, { message: \\"File is too small (min 10KB).\\" })\\n .max(1_000_000, { message: \\"File is too large (max 1MB).\\" })\\n .type(\\"image/png\\", { message: \\"Only PNG files are allowed.\\" });\\nfunction handleFileUpload(file: File) {\\n const result = fileSchema.safeParse(file);\\n if (!result.success) {\\n console.error(\\"File validation failed:\\", result.error.format());\\n alert(\\"Invalid file: \\" + result.error.errors[0].message);\\n return;\\n }\\n console.log(\\"Valid file:\\", result.data);\\n}\\n\\n
While still in beta, Zod 4 brings significant improvements, including better performance, new features, and changes that enhance developer workflow. To upgrade, run the following command in your terminal:
npm install zod@next
After upgrading, update your application code to align with Zod 4’s schemas. Several schemas and methods have been deprecated or replaced:
- message and .errorMap() — Replaced by error functions with a defined precedence hierarchy where schema-level overrides take precedence over parse-level overrides
- create() factories — Deprecated in favor of using native z.object() declarations directly, enabling cleaner and more standard schema construction
- literal(Symbol) — Deprecated due to the non-serializable nature of Symbol and inconsistent support. It's recommended to use explicit schema validation with custom refinements instead
- nonempty() type behavior — Now behaves identically to z.array().min(1). The inferred type has changed from a tuple to a simple array (T[])
- z.record() — The single-argument usage has been dropped. Developers must now specify both key and value schemas explicitly, as shown in the sketch below
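A small before/after sketch of the z.record() change (schema names are illustrative):

import { z } from "zod";

// Zod 3 allowed a single argument: z.record(z.number())
// Zod 4 requires the key schema as well:
const Scores = z.record(z.string(), z.number());

Scores.parse({ math: 90, art: 85 }); // passes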
Zod 4 is not just an update; it's a leap forward for schema validation in TypeScript that truly lives up to the hype. With major performance improvements, built-in JSON Schema conversion, the lightweight @zod/mini variant, enhanced error reporting, and native file validation, Zod 4 provides powerful features that make validation easier and more efficient. Whether you're building lightweight front ends or complex server-side systems, Zod 4 streamlines validation at every level.
If you are new to Zod, you can learn more about how to use schema validation in TypeScript with zod from this article.
Acting as the middleman between your users and your website, cursors can either limit or greatly enhance the way your users experience your site. This is why sleek, intentionally designed custom cursors have become a significant part of UI and UX today.
\\nCustom cursors are an opportunity to give your users direction in an engaging way and create a memorable, immersive experience for them on your website.
\\nIn this tutorial, we’ll take a look at what custom cursors are and learn how to use CSS (and JavaScript) to create custom cursors that will give your website a creative edge. To follow along with this tutorial, you should have some knowledge of HTML, CSS, and JavaScript.
\\nEditor’s note: This article was last updated by Saleh Mubashar in May 2025 to provide more targeted advice on building custom cursors.
\\nWe already interact with custom cursors every day. When you hover over buttons and the pointer cursor changes to a hand, or you hover over some text and the cursor changes to a text cursor, this interactivity is achieved through custom cursors.
\\nHowever, there are many other creative experiences we can provide to our users with cursors. Before we dive into creating custom cursors, you should know that CSS provides you with cursors out of the box for some frequently performed tasks.
\\nThese cursors show you what can be done at the exact location you are hovering over. Examples include cursors indicating that you should click links, drag and drop elements, zoom in and out on things, and more.
\\nCursor value | \\nDescription | \\n
---|---|
alias | \\nAn alias or shortcut can be created | \\n
all-scroll | \\nScroll in any direction | \\n
auto | Default value – the browser picks a cursor |
cell | \\nSelect a table cell | \\n
col-resize | \\nResize columns | \\n
context-menu | \\nOpens a menu | \\n
copy | \\nCopy an item | \\n
crosshair | \\nCross cursor indicating precise selection | \\n
default | \\nStandard cursor | \\n
e-resize / w-resize | \\nResize to the right / left | \\n
grab | \\nDrag an item | \\n
grabbing | \\nItem is being dragged | \\n
help | \\nHelp info is available | \\n
move | \\nAn item can be moved | \\n
n-resize / s-resize | \\nResize upwards/downwards | \\n
ne-resize / nesw-resize / sw-resize | \\nResize top right diagonally | \\n
no-drop | \\nCan’t drop an item | \\n
none | \\nHidden cursor | \\n
not-allowed | \\nAction not allowed | \\n
nw-resize / nwse-resize / se-resize | \\nResize top left diagonally | \\n
pointer | \\nClickable item | \\n
progress | \\nLoading but interactive | \\n
row-resize | \\nResize rows | \\n
text | \\nSelect text | \\n
vertical-text | \\nSelect vertical text | \\n
wait | \\nLoading, not interactive | \\n
zoom-in / zoom-out | \\nZoom in / zoom out | \\n
Hover over the boxes below to see the cursors in action:
\\nSee the Pen
\\nUntitled by Samson Omojola (@Caesar222)
\\non CodePen.
Check out the complete list of CSS cursors here.
\\nWhile these cursors are useful and have some basic styling, we can certainly get more creative with custom cursors.
\\nCreating a custom cursor with CSS is a pretty straightforward process. The first step is to find the image you want to use to replace the default cursor. You can either design one yourself or get a free PNG that suits your needs from an icon library such as FontAwesome.
\\nNext, to create the custom cursor, use the cursor
property with the url()
function. We will pass the image location to the cursor using the url
function:
body {\\n cursor: url(\'path-to-image.png\'), auto;\\n}\\n\\n
To ensure that this cursor is used on all parts of your website, the best place to use the cursor
property is in the body
tag of your HTML. However, if you want, you can assign custom cursors to specific elements instead of the whole website.
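For example, here's a sketch that scopes the custom cursor to a single element (the .drawing-area selector is hypothetical):

/* Only the drawing area uses the custom cursor; the rest of the page keeps the default */
.drawing-area {
  cursor: url('path-to-image.png'), auto;
}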
You can also add a fallback
value to your cursor
property. This value ensures that if the image that serves as your custom cursor is missing or cannot be loaded, your users will still have a working cursor.
In this case, auto
is the fallback
descriptor for your custom cursor
property. Your users will see the regular cursor if the custom one is unavailable.
You can also provide more than one custom cursor (multiple fallbacks) for your website. All you have to do is add their paths to the cursor
property:
body {\\n cursor: url(\'path-to-image.png\'), url(\'path-to-image-2.svg\'), url(\'path-to-image-3.jpeg\'), auto;\\n}\\n\\n
In the code above, the browser tries each of the three cursor images in order and falls back to auto if none of them can be loaded.
\\n\\nBecause they draw attention to elements you want to highlight on your website, custom cursors are best used in specific scenarios, such as:
\\nA few tips to keep in mind while creating custom cursors include:
- Use .png or .svg images for transparency

Say you have a table and you'd like the mouse cursor to change to a pointer (i.e., the hand icon) whenever a user hovers over a row in the table. You can use the CSS cursor
property to achieve this.
Here’s an example:
\\n<style>\\n /* Style the table */\\n table {\\n font-family: arial, sans-serif;\\n border-collapse: collapse;\\n width: 100%;\\n }\\n\\n /* Style the table cells */\\n td, th {\\n border: 1px solid #dddddd;\\n text-align: left;\\n padding: 8px;\\n }\\n\\n /* Style the table rows */\\n tr:hover {\\n cursor: pointer;\\n }\\n</style>\\n\\n<table>\\n <tr>\\n <th>Name</th>\\n <th>Age</th>\\n <th>City</th>\\n </tr>\\n <tr>\\n <td>John</td>\\n <td>30</td>\\n <td>New York</td>\\n </tr>\\n <tr>\\n <td>Jane</td>\\n <td>25</td>\\n <td>Chicago</td>\\n </tr>\\n <tr>\\n <td>Bill</td>\\n <td>35</td>\\n <td>Los Angeles</td>\\n </tr>\\n</table>\\n\\n
In the above code, we use the tr:hover
selector to apply the cursor
property to all table rows when the mouse hovers over them. The cursor
property is set to pointer
, which changes the mouse cursor to a hand icon.
To hide the mouse cursor with CSS, you can use the cursor
property and set its value to none
.
Here’s an example:
\\n<style>\\n /* Style the body element */\\n body {\\n cursor: none;\\n }\\n</style>\\n\\n<body>\\n <!-- Your content goes here --\x3e\\n</body>\\n\\n
This will hide the mouse cursor throughout the entire webpage. If you only want to hide the mouse cursor for a specific element, you can apply the cursor
property to that individual element instead of the body
element.
There are several situations in which hiding the mouse cursor might be useful, such as:
\\nRemember that hiding the mouse cursor can be confusing or disorienting for some users, depending on the use case. This strategy should be used carefully and only when necessary.
\\nWhile custom cursors can be created using CSS, JavaScript offers additional advantages. Before we discuss that, let’s look at the advantages and disadvantages of creating custom cursors with CSS and JavaScript.
\\nThere are numerous reasons why it is preferable to create cursors with CSS:
\\nThe primary drawback of using CSS for custom cursors is the limited ability to add animations or advanced customizations.
\\n\\nThis is where JavaScript comes in. JavaScript allows for more advanced interactions when users engage with the cursor—for example, hovering, clicking, or moving over specific elements. By listening to specific events, the cursor’s movements can then be updated and also be easily animated.
\\nCreating a custom cursor with JavaScript involves manipulating DOM elements. We’ll create some DOM elements, which will serve as our custom cursor, and then use JavaScript to manipulate them. Then, as we move our cursor around, those custom elements will move around as our cursor.
\\nInstead of using or downloading an image, we’ll design an animated cursor using CSS to make it more engaging. Move your cursor around the box below to see an example:
\\nSee the Pen
\\nUntitled by Samson Omojola (@Caesar222)
\\non CodePen.
As you can see, the cursor consists of two elements: a large circle and a small circle. We’ll create two div
elements and assign them class names:
<div class="cursor small"></div>
<div class="cursor big"></div>
Next, we’ll style the circles using CSS. The big circle will have a width and height of 50px
and will be shaped into a circle using border-radius: 50%
.
The small circle will be hollow, so we’ll define a border with a border-radius
of 50%
and set its width and height to 6px
each. We also disable the default cursor by setting cursor: none
so that our custom cursor can take its place.
To animate the big circle, we’ll use @keyframes
. The animation lasts 2s
, starting with a background-color
of green and an opacity of 0.2
. At the midpoint, the color changes to orange, and by the end, it turns red. We set animation-iteration-count
to infinite
to make the animation loop continuously:
body {\\n background-color: #171717;\\n cursor: none;\\n height: 120vh;\\n}\\n\\n.small {\\n width: 6px;\\n height: 6px;\\n border: 2px solid #fff;\\n border-radius: 50%;\\n}\\n\\n.big {\\n width: 50px;\\n height: 50px;\\n border-radius: 50%;\\n animation-name: stretch;\\n animation-duration: 2s;\\n animation-timing-function: ease-out;\\n animation-direction: alternate;\\n animation-iteration-count: infinite;\\n}\\n\\n@keyframes stretch {\\n 0% {\\n opacity: 0.2;\\n background-color: green;\\n border-radius: 100%;\\n }\\n 50% {\\n background-color: orange;\\n }\\n 100% {\\n background-color: red;\\n }\\n}\\n\\n
Now, to make the elements follow the mouse movement, we’ll use JavaScript. The script below listens for mouse movement on the webpage. When the user moves their mouse, the function retrieves the x
and y
coordinates and updates the position of both div
elements accordingly:
const cursorSmall = document.querySelector(\'.small\');\\nconst cursorBig = document.querySelector(\'.big\');\\n\\nconst positionElement = (e) => {\\n const mouseX = e.clientX;\\n const mouseY = e.clientY;\\n\\n cursorSmall.style.transform = `translate3d(${mouseX}px, ${mouseY}px, 0)`;\\n cursorBig.style.transform = `translate3d(${mouseX}px, ${mouseY}px, 0)`;\\n};\\n\\nwindow.addEventListener(\'mousemove\', positionElement);\\n\\n
See the complete code alongside the interactive cursor in the below CodePen:
\\nSee the Pen
\\nUntitled by Samson Omojola (@Caesar222)
\\non CodePen.
Here’s how it works:
- We use querySelector to access the two div elements
- The positionElement function retrieves the current mouse x and y coordinates
- It then updates the transform: translate3d() property for both cursor elements, moving them accordingly
- transform repositions elements in both horizontal and vertical directions, while translate3d adjusts their position in 3D space

Custom cursors can make a website feel unique, but they can also be annoying or distracting if overused. Many people find them frustrating, especially if they make navigation harder. A cursor should help users, not get in their way.
\\nBefore adding a custom cursor, ask yourself if it actually improves the experience or if it’s just for looks. Also, keep in mind that not all browsers support fancy cursor effects, especially older ones. Here’s the browser compatibility data for the cursor
property from CanIUse:
To keep things user-friendly, use custom cursors sparingly and make sure they fit the design. If possible, give users the option to turn them off so they can stick with the default system cursor if they want.
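One way to offer that opt-out is a simple toggle; here's a sketch (the button ID, .cursor class, and .native-cursor class are hypothetical):

// Swap between the custom cursor and the system default
const toggle = document.querySelector('#cursor-toggle'); // hypothetical button
const cursorParts = document.querySelectorAll('.cursor'); // the custom cursor elements

toggle.addEventListener('click', () => {
  // .native-cursor is assumed to set `cursor: auto` on the body in your CSS
  const useNative = document.body.classList.toggle('native-cursor');
  cursorParts.forEach((el) => {
    el.style.display = useNative ? 'none' : '';
  });
});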
\\nCustom cursors might seem like a fun way to personalize a website, but they can cause serious accessibility issues. Many people rely on built-in OS features to modify their cursors, such as increasing size or using high-contrast colors. These changes help users with low vision or motor impairments navigate their devices more easily.
\\nWhen a website overrides these modifications with a custom CSS cursor, it can make the experience frustrating—or even unusable—for some users.
\\nIf you must use a custom cursor, make sure to:
- Use prefers-reduced-motion to disable custom cursors for users who find them distracting:

@media (prefers-reduced-motion: reduce) {
  * {
    cursor: auto; /* Reverts to the default cursor */
  }
}
- Add aria-hidden="true" to the cursor elements to prevent them from being picked up by screen readers

At the end of the day, a cursor should enhance usability, not get in the way. If there's any chance a custom cursor could make a website harder to use, it's best to avoid it altogether. I would also suggest reading this excellent article by Eric Bailey on the drawbacks of custom cursors. He makes a bunch of really good points.
\\nIn this tutorial, we discussed built-in CSS cursors, creating custom cursors with CSS, using multiple cursors, and adding animations with CSS and JavaScript. We also covered the pros and cons of using CSS vs. JavaScript for custom cursors, when to go beyond default options and accessibility factors to keep in mind.
Modern React applications require more than polished visuals; they demand modular, reusable components that are consistent and easy to maintain. Yet, turning high-fidelity designs into production-ready code remains one of the most fragmented steps in the development process. Designers hand off mockups, developers interpret them, and along the way, alignment often breaks down, resulting in duplicated effort, inconsistent UI, and inevitably delayed delivery.
\\nIn this article, you’ll learn how to take an open source design from Penpot, a collaborative, Figma-like design tool, and transform it into a fully functional React component, documented and tested with Storybook. The goal is to create a repeatable approach that produces clean, maintainable UI code that stays aligned with the design.
\\nPenpot is an open source design and prototyping tool built for designers and developers to collaborate. It is browser-based and platform-independent, meaning anyone can access it without installing special software, unlike traditional tools that lock users into a specific workflow or platform.
\\nDesigners often feel limited by tools like Figma or Sketch, which can restrict access due to paid plans, platform dependency (e.g., macOS only), or lack of real-time collaboration with developers who live in code. Penpot removes those limitations by being fully open source, self-hostable, and accessible to anyone on any operating system.
\\nPenpot builds its layout engine on Flexbox, which mirrors how modern UIs are actually implemented in code. This results in less friction during handoff, which means what designers build in Penpot looks and behaves closer to the real web product.
\\nAt first glance, Penpot looks a lot like Figma; it has a clean interface, drag-and-drop boards, prototyping flows, team collaboration, and components. But its foundational principles and use cases set it apart.
\\nPenpot is fully open source and browser-based. You can use it in the cloud or host it yourself. That flexibility matters when you hit a paywall in Figma or worry about storing sensitive design systems in a closed ecosystem. With Penpot, design teams can own their design stack, which opens doors for compliance, customization, and cost control.
\\nIn Penpot’s latest version, it delivers features that speak directly to modern design-dev pain points:
\\nThese features are intentionally built to reflect how real design teams work, especially when developers are part of a team. For example, layout tools in Penpot map directly to frontend systems. You don’t have to guess how a frame will behave on different screens. You can design with confidence, knowing your layout will hold up in code. This alignment saves time, cuts down rework, and improves communication between roles.
\\nThe inspect mode is also a huge win. It gives developers clean, copyable HTML and CSS — no more digging for spacing values or guessing at alignments. Best of all, Penpot is free. You can use it for open source projects, teams with strict data policies, organizations that want to avoid vendor lock-in, and design systems with complex dev handoff needs.
\\nSo, is Penpot a Figma replacement? It depends on your context. If you’re a solo designer in a Figma-heavy organization, switching might not be worth it. But if you’re part of a team that values open tooling, frontend-friendly layouts, and cost flexibility, then Penpot isn’t just an alternative. It’s a design tool built to think like a developer:
\\nFeature/Capability | \\nPenpot | \\nFigma | \\n
---|---|---|
Licensing | \\nFully open source, free forever | \\nProprietary, freemium model with paid tiers | \\n
Hosting options | \\nCloud-based or self-hosted (ideal for compliance or privacy) | \\nCloud-only, no self-hosting | \\n
Platform support | \\nBrowser-based, works across all OS platforms | \\nBrowser and desktop apps, with limited functionality offline | \\n
Collaboration | \\nReal-time team collaboration | \\nReal-time team collaboration | \\n
Design-to-code alignment | \\nUses true CSS Flexbox and Grid layout models | \\nUses custom layout engine, may differ from actual CSS | \\n
Developer inspect mode | \\nExposes clean, copyable HTML/CSS for every element | \\nShows properties but lacks semantic code output | \\n
Component system | \\nBuilt on atomic design principles | \\nStrong component system, but not atomic-first | \\n
Community and customization | \\nOpen community, extensible via open codebase | \\nClosed ecosystem, limited to Figma plugins | \\n
Vendor lock-in | \\nNone, teams can fully own their stack | \\nYes, locked into Figma’s cloud and ecosystem | \\n
Use cases | \\nBest for dev-heavy teams, open source projects, privacy-first orgs | \\nBest for fast prototyping, solo designers, and orgs already using Figma | \\n
Cost | \\nFree forever | \\nFree tier available; paid plans for advanced features | \\n
You need to set up the environment before we begin building:
\\nVisit Penpot’s website and sign up for a free account. You’ll be able to create designs and export assets directly from the platform.
\\nNext, set up a basic React project. Use the following commands to create a new project using either Create React App or Vite:
\\nnpx create-react-app penpot-storybook-demo\\nnpm create vite@latest penpot-storybook-demo --template react\\n\\n
Then, navigate to your project directory:
\\ncd penpot-storybook-demo\\n\\n
Now, install Storybook into your project by running the following command:
\\nnpx storybook@latest init\\n\\n
This will set up Storybook in your project and configure it to run on localhost:6006:
\\nnpm run storybook\\n\\n
Storybook should open in your browser, ready to start building UI components.
\\nNow that we have our development environment ready, it’s time to design the UI in Penpot:
\\nOpen Penpot and create a new design. Let’s build an application form for context. Add text fields for the username and password, a button to submit the form, and labels to guide the user. Use a simple layout with a grid or columns to keep everything aligned neatly.
\\n\\nHere’s an example of what the application form might look like:
\\nUse Penpot’s vector tools to draw the basic shapes for your layout; rectangles, ellipses, and lines are all good options. Adjust the size, corner radius, and position of each shape until everything fits together properly. This results in a clear, structured design that can be easily rebuilt in code later:
\\nWhen you finish your design, export the parts you’ll need in your code. These might be SVG icons, PNG images, or a design spec with details like colors and spacing.
\\nTo do this in Penpot, click on the elements you want to export. Then, choose a file format that works best for your project:
\\nWith the design in hand, we can now start building the actual React component.
\\nOpen your React project. Inside the src
directory, create a new folder called components
. This folder will hold all your UI components.
Inside components
, create a new file named LoginForm.js
. This will house your login form:
mkdir src/components\\ntouch src/components/LoginForm.js\\n\\n
Use the design you created in Penpot as your reference. You can write the form using Tailwind CSS for quick layout and styling. If you prefer CSS Modules or plain CSS, feel free to swap that in. Here’s a basic version of the form using Tailwind:
\\nimport React, { useState, useEffect } from \'react\';\\nexport default function LoginForm() {\\n const [username, setUsername] = useState(\'\');\\n const [password, setPassword] = useState(\'\');\\n const [error, setError] = useState(\'\');\\n const handleSubmit = (e) => {\\n e.preventDefault();\\n if (!username || !password) {\\n setError(\'All fields are required\');\\n return;\\n }\\n setError(\'\');\\n console.log(\'Submitted form:\', { username, password });\\n };\\n useEffect(() => {\\n console.log(\'LoginForm component mounted\');\\n }, []);\\n return (\\n <form onSubmit={handleSubmit} className=\\"space-y-4 max-w-sm mx-auto\\">\\n <div>\\n <label htmlFor=\\"username\\" className=\\"block text-sm font-medium text-gray-700\\">\\n Username\\n </label>\\n <input\\n id=\\"username\\"\\n type=\\"text\\"\\n value={username}\\n onChange={(e) => setUsername(e.target.value)}\\n className=\\"mt-1 block w-full px-3 py-2 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm\\"\\n />\\n </div>\\n <div>\\n <label htmlFor=\\"password\\" className=\\"block text-sm font-medium text-gray-700\\">\\n Password\\n </label>\\n <input\\n id=\\"password\\"\\n type=\\"password\\"\\n value={password}\\n onChange={(e) => setPassword(e.target.value)}\\n className=\\"mt-1 block w-full px-3 py-2 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm\\"\\n />\\n </div>\\n {error && <p className=\\"text-red-500 text-sm\\">{error}</p>}\\n <button\\n type=\\"submit\\"\\n className=\\"w-full bg-blue-500 text-white py-2 rounded-md\\"\\n >\\n Login\\n </button>\\n </form>\\n );\\n}\\n\\n
Here, we’ve used Tailwind CSS for styling and added state handling with useState
to manage the form inputs. Make sure to reference your Penpot design specs for exact dimensions, colors, and typography:
Now that we have our component, it’s time to document and test it using Storybook.
\\nIn your src/components
folder, create a new file named LoginForm.stories.js
to document the login form in Storybook:
import React from \'react\';\\nimport LoginForm from \'./LoginForm\';\\nexport default {\\n title: \'Components/LoginForm\',\\n component: LoginForm,\\n};\\nexport const Default = () => <LoginForm />;\\n\\n
This will allow you to view and test the LoginForm
component alone. Storybook will automatically load and display the component.
Open the component in Storybook. Use the built-in tools to check how it looks on different screen sizes. Try resizing the preview window or switching to mobile views.
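If you’d rather pin a story to a specific breakpoint, the viewport add-on (installed in the next step) lets you set a default viewport per story. Here’s a minimal sketch, assuming the add-on’s built-in 'mobile1' preset:

// LoginForm.stories.js
export const Mobile = () => <LoginForm />;

// 'mobile1' is one of the viewport add-on's built-in presets
Mobile.parameters = {
  viewport: { defaultViewport: 'mobile1' },
};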
To catch accessibility issues early, install Storybook’s viewport and accessibility add-ons:
\\nnpm install @storybook/addon-viewport @storybook/addon-a11y\\n\\n
Add it to your .storybook/main.js
file, along with any other tools you use.
If you set up your React project with Vite, make sure Storybook can read your project config. Edit your .storybook/main.js
file and include the Vite options, especially for paths and plugins like Tailwind:
const { mergeConfig } = require(\'vite\');\\nconst path = require(\'path\');\\nmodule.exports = {\\n framework: {\\n name: \'@storybook/react-vite\',\\n options: {},\\n },\\n stories: [\'../src/**/*.stories.@(js|jsx|ts|tsx)\'],\\n addons: [\\n \'@storybook/addon-viewport\',\\n \'@storybook/addon-a11y\',\\n \'@storybook/addon-essentials\',\\n ],\\n async viteFinal(config) {\\n return mergeConfig(config, {\\n resolve: {\\n alias: {\\n \'@\': path.resolve(__dirname, \'../src\'),\\n },\\n },\\n });\\n },\\n};\\n\\n
Don’t forget to import your Tailwind CSS in .storybook/preview.js
:
import \'../src/index.css\';\\n\\n
This setup ensures that your component stories match the look and behavior of your actual app while allowing you to catch layout and accessibility issues early in development:
Before you push the component to production, take some time to clean it up. Refactor the form so it can be reused, and remove hard-coded values where possible. Check how the layout looks on different screen sizes and fix any issues that arise, and group related functions and elements together so the component is easier to understand and reuse in other parts of your React project.
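As a rough sketch of what that refactor could look like, you might extract the repeated label-plus-input markup into a small reusable component. The TextField name and props below are illustrative, not part of the original design:

// components/TextField.js: a hypothetical helper extracted from LoginForm
export default function TextField({ id, label, type = 'text', value, onChange }) {
  return (
    <div>
      <label htmlFor={id} className="block text-sm font-medium text-gray-700">
        {label}
      </label>
      <input
        id={id}
        type={type}
        value={value}
        onChange={onChange}
        className="mt-1 block w-full px-3 py-2 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm"
      />
    </div>
  );
}

The login form can then render two TextField instances instead of duplicating the markup.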
\\nIn this article, we walked through how to convert Penpot designs into functional UI components using React and Storybook. We started by setting up a clean development environment, then designed a login form in Penpot and built it as a reusable React component. From there, we documented and tested it using Storybook. This workflow creates a smooth, consistent handoff between design and development.
Try incorporating this approach into your next project to streamline your UI development pipeline, especially if your UI includes forms or other components that are reused across multiple screens.
I know what you’re thinking: why on Earth would you use JavaScript to develop games? I thought the same thing when I discovered that you could use JavaScript and HTML5 to develop 3D games. The truth is, since the introduction of the JavaScript WebGL API, modern browsers have intuitive capabilities that enable them to render more complex and sophisticated 2D and 3D graphics without relying on third-party plugins.
\\nYou could start your web game development journey with pure JavaScript, which is probably the best way to learn if you’re a beginner. But why reinvent the wheel when there are so many widely adopted game engines to choose from?
If you didn’t know, an HTML5 game engine is a software framework designed to help you build browser-based games with HTML5 technologies, primarily JavaScript, along with HTML and CSS. Most HTML5 game engines are built for running directly in modern web browsers without the need for plugins. This makes them ideal for creating cross-platform, mobile-friendly experiences.
\\nThis guide ranks the top 10 JavaScript/HTML5 game engines by popularity, capability, and use case. Whether you’re a solo indie developer, mobile-focused creator, rapid prototyper, or part of a small team, there’s something here for you. These engines power everything from lightweight browser games to full-featured cross-platform apps.
We’ll highlight each engine’s strengths, learning curve, and ideal scenarios, plus provide an easy-to-follow, web-based demo to get you started.
\\nWe’ll cover:
\\nEditor’s note: This article was updated by Saleh Mubashar in May 2025 to cover four additional HTML5 game engines: GDevelop, Defold, Godot Web Build, and Construct.
\\nBefore we start, here are a few important factors to consider when selecting a JavaScript/HTML5 game engine. The right choice depends on your project type, experience level, and deployment goals:
\\nDecide whether your game will be 2D, 3D, or a mix. Some engines (like Pixi.js) are built for 2D, while others (like Three.js or Babylon.js) excel at 3D rendering.
\\nEngines like Phaser and Construct are beginner-friendly, while others may require solid JavaScript or game dev experience.
\\nSome engines come with visual editors or online sandboxes that help speed up development (e.g., Godot Web, PlayCanvas, Construct), while others are code-only.
\\nIf you’re targeting mobile browsers or devices, performance is key. While all modern engines support mobile browsers, solutions like Phaser with Cordova or Defold’s native export options provide better performance for app store distribution. With web-only engines, a lot of tedious optimization is needed, especially for lower-end devices.
\\nLicensing affects what you can build and publish. Most engines in this guide are open source (MIT, Apache), but others, like Construct, use a freemium model that may limit commercial use without a paid plan. Make sure the license works for your use case.
\\nPopular engines like Phaser, Three.js, and Babylon have large, active communities, frequent updates, extensive tutorials, GitHub repositories, and Stack Overflow threads. On the other hand, newer or more niche engines may offer unique features but might lack community support, making them harder to work with for solo developers or beginners.
\\nTo help you decide faster, here’s a side-by-side comparison of the top engines based on the criteria above:
\\nEngine | \\nType | \\nLicense / Cost | \\nMobile Support | \\nEditor Support | \\nCommunity \\n& Resources | \\nBest For | \\n
---|---|---|---|---|---|---|
Three.js | \\n3D | \\nMIT (free and open source) | \\nWeb browsers only | \\nOnline editor present on the three.js website | \\nLarge community with complete documentation and large number of examples | \\n3D rendering, WebGL projects, visualizations | \\n
Pixi.js | \\n2D | \\nMIT (free and open source) | \\nWeb browsers + native via 3rd party apps | \\nNo editor but it has a code playground on its site | \\nExtensive documentation and a large number of examples | \\nFast 2D rendering, UI-heavy games, interactive apps | \\n
Phaser | \\n2D | \\nMIT (free and open source). Phaser Editor is a paid product | \\nWeb browsers + native via 3rd party apps | \\nVisual editor available | \\nLarge and active community, excellent docs and a plugin ecosystem | \\n2D browser and mobile games with rapid development needs | \\n
Babylon.js | \\n3D | \\nApache 2.0 (free and open source) | \\nExcellent – via Web (PWA), Ionic, React Native, or Babylon Native for custom apps | \\nOnline playground for real-time coding and testing | \\nVery active forum, large dev base, backed by Microsoft and extensive tutorials | \\nHigh-end 3D browser games, mobile/desktop experiences, cross-platform | \\n
Matter.js | \\n2D (Physics) | \\nMIT (free and open source) | \\nWeb browsers only | \\nNo dedicated editor, but includes tools like Inspector and Demo GUI | \\nDecent community, large number of examples and experimental MatterTools available | \\nAdding realistic 2D physics to web games and visualizations | \\n
PlayCanvas | \\n3D | \\nMIT (engine), proprietary cloud editor; Free (public projects) | \\nMobile-first, runs in all modern browsers | \\nFull-featured online visual editor with real-time collaboration | \\nExcellent documentation and a large number of tutorials and examples | \\nBrowser-based 3D games and WebGL rendering | \\n
GDevelop | \\n2D, 3D | \\nMIT (free and open source); Paid cloud services available | \\nNative export to Android and iOS | \\nFull-featured editor (desktop, web) | \\nActive community, asset store and extensive documentation | \\n2D/3D no-code games for beginners, educators, indie devs | \\n
Defold | \\n2D, 3D | \\nSource-available (Defold License, free forever, no royalties) | \\nFull support for Android and iOS | \\nFull-featured downloadable IDE with visual editor | \\nStrong documentation, active extension portal, supported by Defold Foundation | \\nLightweight 2D and 3D games, mobile and HTML5 games, cross-platform releases | \\n
Godot | \\n2D, 3D | \\nMIT (free and open source) | \\nNative export to Android and iOS | \\nFull-featured downloadable editor | \\nVery large and growing community, excellent docs, asset store, lots of tutorials | \\nFull 2D/3D games, mobile, desktop, and even console projects | \\n
Construct | \\n2D | \\nLimited free version; paid subscription for full use | \\nFull export to Android, iOS via Cordova and HTML5 | \\nFull-featured visual editor (browser-based, works offline too) | \\nStrong and active community, official tutorials, asset store, marketplace | \\nNo-code/low-code 2D games, quick prototyping, educational uses | \\n
Three.js is one of the most popular JavaScript libraries for creating and animating 3D computer graphics in a web browser using WebGL. It’s also a great tool for creating 3D games for web browsers.
\\nBecause Three.js is based on JavaScript, it’s relatively easy to add any interactivity between 3D objects and user interfaces, such as keyboard and mouse. This makes the library perfectly suitable for making 3D games on the web.
\\nIf you’re looking to delve into creating simple or complex 3D objects on the web, Three.js is the go-to library. Its top advantages include a vast community of talented users and abundant examples and resources.
Three.js is the first 3D animation library I worked with, and I’d recommend it to anyone starting out with game development.
\\nLet’s create a simple rotating geometry to demonstrate what Three.js can do:
import * as THREE from 'js/three.module.js';

let camera, scene, renderer;
let geometry, material, mesh;

init();
animate();
Create an init
function to set up everything we need to run our demo animation with Three.js:
function init() {
  // Assign to the module-level variables so animate() can use them
  camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.01, 20);
  camera.position.z = 1;

  scene = new THREE.Scene();

  geometry = new THREE.BoxGeometry(0.5, 0.5, 0.5);
  material = new THREE.MeshNormalMaterial();
  mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);

  renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);
}
Next, create an animate
function to animate the object with your desired motion type:
function animate() {
  requestAnimationFrame(animate);

  // Spin the cube a little on each frame
  mesh.rotation.x += 0.01;
  mesh.rotation.y += 0.02;

  renderer.render(scene, camera);
}
The finished result should look like this:
\\nRefer to the repo and official documentation to learn more about Three.js.
\\nIf you’re looking for a JS library to create rich and interactive 2D graphics with support for cross-platform applications, look no further than Pixi.js. This HTML5 creation engine enables you to develop animations and games without prior knowledge of the WebGL API.
Pixi is a strong choice in most scenarios, especially if you’re creating performance-oriented 2D interactive graphics with device compatibility in mind. Pixi’s support for Canvas fallback in cases where WebGL fails is a particularly enticing feature.
\\nLet’s build a simple demo to see Pixi.js in action. Use the following command to add Pixi.js to your project:
\\nnpm install pixi.js\\n\\n
Or CDN:
\\n<script src=\\"https://cdnjs.cloudflare.com/ajax/libs/pixi.js/5.1.3/pixi.min.js\\" ></script>\\n\\n
Create a script file and add the following code:
import * as PIXI from 'pixi.js';

const app = new PIXI.Application();
document.body.appendChild(app.view);

// Load the sprite texture, then set up the scene
app.loader.add('jumper', 'jumper.png').load((loader, resources) => {
  const jumper = new PIXI.Sprite(resources.jumper.texture);

  // Center the sprite and rotate it around its middle
  jumper.x = app.renderer.width / 2;
  jumper.y = app.renderer.height / 2;
  jumper.anchor.x = 0.5;
  jumper.anchor.y = 0.5;

  app.stage.addChild(jumper);

  app.ticker.add(() => {
    jumper.rotation += 0.01;
  });
});
The result should look something like this:
\\nRefer to the repo and official documentation to learn more about Pixi.js.
\\nPhaser is a cross-platform game engine that enables you to create JavaScript and HTML5-based games and compile them for many platforms. For example, you might decide to compile your game to iOS, Android, and other native apps using third-party tools.
\\nPhaser 4 is a full rewrite (smaller bundle, modern TypeScript, WebGPU focus) and is currently in beta. It is expected to be released at the end of 2025.
\\nPhaser is good for developing cross-platform game applications. Its support for a wide range of plugins and the large community of developers building games with Phaser make it easy to get started with the framework.
\\nLet’s build a basic application with Phaser. First, install Phaser as a Node module or via CDN:
\\nnpm install phaser\\n\\n
OR:
<script src="//cdn.jsdelivr.net/npm/phaser/dist/phaser.min.js"></script>
Next, pass in some default configurations to your Phaser engine:
\\nconst config = {\\n type: Phaser.AUTO,\\n width: 800,\\n height: 600,\\n physics: {\\n default: \\"arcade\\",\\n arcade: {\\n gravity: { y: 200 },\\n },\\n },\\n scene: {\\n preload: preload,\\n create: create,\\n },\\n};\\nconst game = new Phaser.Game(config);\\n\\n
Create a preload function to load in your default static files:
\\nfunction preload() {\\n this.load.setBaseURL(\\"https://labs.phaser.io\\");\\n this.load.image(\\"sky\\", \\"assets/skies/space3.png\\");\\n this.load.image(\\"plane\\", \\"assets/sprites/ww2plane.png\\");\\n this.load.image(\\"green\\", \\"assets/particles/green.png\\");\\n this.load.image(\\"astroid\\", \\"assets/games/asteroids/asteroid1.png\\");\\n this.load.image(\\"astroid2\\", \\"assets/games/asteroids/asteroid1.png\\");\\n this.load.image(\\"astroid3\\", \\"assets/games/asteroids/asteroid1.png\\");\\n}\\n\\n
Finally, define a create
function to display your newly created game:
function create() {\\n this.add.image(400, 300, \\"sky\\");\\n this.add.image(700, 300, \\"astroid\\");\\n this.add.image(100, 200, \\"astroid2\\");\\n this.add.image(400, 40, \\"astroid3\\");\\n const particles = this.add.particles(\\"green\\");\\n const emitter = particles.createEmitter({\\n speed: 100,\\n scale: { start: 1, end: 0 },\\n blendMode: \\"ADD\\",\\n });\\n const plane = this.physics.add.image(400, 100, \\"plane\\");\\n plane.setVelocity(100, 200);\\n plane.setBounce(1, 1);\\n plane.setCollideWorldBounds(true);\\n emitter.startFollow(plane);\\n}\\n\\n
Refer to the repo and official documentation to learn more about Phaser.js.
\\nBabylon.js is a powerful, simple, open game and rendering engine packed into a friendly JavaScript framework.
\\nMany large brands trust Babylon.js as their game development engine of choice. The Babylon Playground, a thriving hub of code samples, is a great tool to help you get started with the framework.
\\nBabylon and its modules are published on npm. To install it, run the following command in your command line tool:
\\nnpm install babylonjs --save\\n\\n
Alternatively, you can integrate the library into your project via CDN. To do so, create an index.html file
and add the following code:
<canvas id=\\"renderCanvas\\"></canvas>\\n<script src=\\"https://cdn.babylonjs.com/babylon.js\\"></script>\\n<script src=\\"script.js\\"></script>\\n\\n
After installation, you can start using the library by importing the global object or destructuring the scene and engine methods like so:
import * as BABYLON from 'babylonjs';
// OR
import { Scene, Engine } from 'babylonjs';

Next, create a script.js file and include the following code:

import { createScene } from './scene.js';

window.addEventListener('DOMContentLoaded', function () {
  const canvas = document.getElementById('renderCanvas');
  const engine = new BABYLON.Engine(canvas, true);
  const scene = createScene(engine, canvas);

  engine.runRenderLoop(function () {
    scene.render();
  });

  window.addEventListener('resize', function () {
    engine.resize();
  });
});
Create a scene.js
file and add the following code:
export function createScene(engine, canvas) {
  const scene = new BABYLON.Scene(engine);

  const camera = new BABYLON.FreeCamera('camera', new BABYLON.Vector3(0, 5, -10), scene);
  camera.setTarget(BABYLON.Vector3.Zero());
  camera.attachControl(canvas, false);

  const light = new BABYLON.HemisphericLight('light', new BABYLON.Vector3(0, 1, 0), scene);

  const sphere = BABYLON.Mesh.CreateSphere('sphere', 16, 2, scene);
  sphere.position.y = 1;

  const ground = BABYLON.Mesh.CreateGround('ground', 6, 6, 2, scene);

  return scene;
}
Below is a preview of what your animation should look like:
\\nRefer to the repo and official documentation to learn more about Babylon.js.
\\nMatter.js is a JavaScript 2D, rigid-body physics engine for the web. Even though it’s a JavaScript physics engine, you can combine it with various packages and plugins to create interesting web games.
Matter.js is arguably the best library for creating simple, moving animation objects. Although it focuses on 2D physics, you can combine it with third-party solutions to create dynamic games.
\\nTo get started with Matter.js in a vanilla project, download the matter.js or matter.min.js package file from the official GitHub repo and add it to the HTML file with the following code:
\\n<script src=\\"matter.js\\"></script>\\n\\n
However, if you’re using a bundler, such as Parcel, you can install the package as a Node module via npm or yarn using the following commands:
\\nnpm install matter-js\\n//OR\\nyarn add matter-js\\n\\n
The following is a minimal example using the built-in renderer and runner to get you started:
\\n// module aliases\\nconst Engine = Matter.Engine;\\nconst Render = Matter.Render;\\nconst World = Matter.World;\\nconst Bodies = Matter.Bodies;\\n// create an engine\\nconst engine = Engine.create();\\n// instantiating the renderer\\nconst render = Render.create({\\n element: document.body,\\n engine: engine,\\n options: {\\n wireframes: false,\\n showAngleIndicator: false,\\n background: \\"white\\",\\n },\\n});\\n// create two boxes and a ground\\nconst boxA = Bodies.rectangle(300, 300, 70, 70);\\nconst boxB = Bodies.rectangle(400, 10, 60, 60);\\nconst ground = Bodies.rectangle(300, 510, 910, 10, { isStatic: true });\\n// add all bodies to the world\\nWorld.add(engine.world, [boxA, boxB, ground]);\\n// run the engine\\nMatter.Runner.run(engine);\\n// run the renderer\\nRender.run(render);\\n\\n
Include the above script in a page that has Matter.js installed, and then open the page in your browser. Ensure the script is at the bottom of the page (or called on the window load event or after the DOM is ready).
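For reference, a minimal host page could look like the following (file names assumed):

<!DOCTYPE html>
<html>
  <body>
    <!-- Load Matter.js first, then the demo script at the bottom of the body -->
    <script src="matter.js"></script>
    <script src="demo.js"></script>
  </body>
</html>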
\\nYou should see two rectangular bodies fall and then hit each other as they land on the ground. If you don’t, check the browser console to see if there are any errors:
\\nRefer to the repo and official documentation to learn more about Matter.js.
PlayCanvas uses HTML5 and WebGL to run games and other interactive 3D content in any mobile or desktop browser. Though it’s free and open source, PlayCanvas focuses more on the game engine than the rendering engine. Therefore, it’s more suitable for creating 3D games that use WebGL and the HTML5 Canvas.
PlayCanvas is great for creating small public projects or school projects — at least, that’s what I’ve used it for. If you need more features and more control over your game development, you might want to consider subscribing for premium features.
For now, let’s do some basic rendering with the engine. As a first step, add the library to your page, either by downloading the package file from the GitHub repository or by loading it from the CDN, using the following code:
<script src='https://code.playcanvas.com/playcanvas-stable.min.js'></script>
Next, create a script.js
file and link it to the HTML file using the following code:
<canvas id='canvas'></canvas>
<script src='/script.js'></script>
Now, open the script.js
file and add the following code to instantiate a new PlayCanvas application:
const canvas = document.getElementById(\'canvas\');\\n const app = new pc.Application(canvas);\\n window.addEventListener(\'resize\', () => app.resizeCanvas());\\n const box = new pc.Entity(\'cube\');\\n box.addComponent(\'model\', {\\n type: \'box\'\\n });\\n app.root.addChild(box);\\n\\n
To create the camera and light for the object, add the following code:
\\nconst camera = new pc.Entity(\'camera\');\\n camera.addComponent(\'camera\', {\\n clearColor: new pc.Color(.1, .1, .1)\\n });\\n app.root.addChild(camera);\\n camera.setPosition(0, 0, 3);\\n const light = new pc.Entity(\'light\');\\n light.addComponent(\'light\');\\n app.root.addChild(light);\\n light.setEulerAngles(46, 0, 0);\\n app.on(\'update\', dt => box.rotate(10 * dt, 20 * dt, 30 * dt));\\n app.start();\\n\\n
The code above should produce the following result:
\\nRefer to the repo and official documentation to learn more about PlayCanvas.
\\nGDevelop is a free, open-source no-code game engine ideal for creating 2D and lightweight 3D games. It features a powerful visual editor, an active community, detailed documentation, and an extensive asset store. With export options for Android, iOS, desktop, and the web, you can deploy your game almost anywhere.
\\nTo get started with GDevelop, download it from gdevelop.io/download — it’s available for Windows, macOS, Linux, Android, iOS, and even runs in the browser. Installation steps vary slightly by platform but are beginner-friendly. Once installed, you can either customize a game template or start a new project from scratch:
\\nRefer to the repo and official documentation to learn more about GDevelop.
\\nDefold is a source-available, cross-platform game engine ideal for creating lightweight 2D and 3D games. It comes with a full-featured IDE that includes both a visual editor and a Lua-based code editor, offering flexibility for both no-code and code-first developers. Similar to GDevelop, it has cross-platform support for all major web, desktop, and mobile platforms.
\\nTo get started, download Defold from defold.com/download. It runs on Windows, macOS, and Linux. There’s nothing else to install. Just open the editor, start a new project or use a template, and you’re ready to go:
\\nRefer to the repo and official documentation to learn more about Defold.
\\nGodot is a powerful, open-source game engine designed for both 2D and 3D game development. It supports GDScript (its own Python-like language), C#, and C++, making it accessible for beginners and flexible for experienced developers. Godot’s node-based architecture encourages a modular approach to game development. It is one of the most popular game engines out there, comparable to Unreal and Unity in terms of popularity.
To get started, download Godot from godotengine.org/download — it runs on Windows, macOS, Linux, and Android. You can also work in the online editor:
\\nRefer to the repo and official documentation to learn more about Godot.
\\nConstruct is a browser-based, no-code/low-code 2D game engine designed for rapid prototyping and educational use. It has a fully visual editor, eliminating the need for any coding. This makes it excellent for beginners. Construct runs entirely in the browser, works offline, and offers export options for HTML5, Android, iOS, desktop, and more.
\\nRefer to the official documentation to learn more about Construct.
\\nBy breaking down the pros, cons, and use cases associated with each game engine listed above, I hope you gained some insight into which one best suits the type of game or animation you want to create.
\\nIf you’re primarily looking for a powerful rendering tool for the web, Three.js is a top choice. But if you want an all-in-one game engine with a visual editor, tools like Godot, Defold, or GDevelop offer an excellent balance between usability and capability for both 2D and 3D development.
\\nEach engine has its own strengths — the best one ultimately depends on your experience level, goals, and the type of game you want to build.
\\nWhat game engine do you use in your game development projects? Let us know in the comments!
One of the most common challenges when building AI-powered applications is providing them context. Without context, even the most sophisticated models are just glorified chatbots, talking but unable to take meaningful action.
\\nTake, for example, a bookstore application that uses an AI agent to help users make purchase decisions. The agent might receive a query like:
\\n\\n\\n“I need a list of the highest-rated books on Goodreads that I can purchase from this website.”
\\n
On its own, the AI model can’t fulfill this request because it lacks the necessary context. It would need access to the store’s product database to check availability and also be able to query Goodreads, likely through a third-party API, to retrieve book ratings. Only when it has access to both sources can it generate a useful and accurate response.
\\nBefore now, developers had to manually integrate AI models with various external datasets, a process that was often tedious and lacked standardization. To solve this, Anthropic, the team behind Claude, introduced the Model Context Protocol (MCP) in 2024. This open source protocol provides a universal, standardized way for AI models to access contextual information from diverse data sources and tools.
\\nIn this article, we’ll explore what MCP is, break down its core components, explain how they work together, and walk through a hands-on implementation.
\\nMCP is an open standard that allows AI models to securely interact with local and remote tools through standardized server implementations, whether it’s querying a database or running a command. This lets models go beyond their training data, making them flexible and aware of the world around them.
\\nTo clarify, let’s revisit the example from the previous section. If you build an AI tool that fetches a list of available books from the store’s database and provides that information to the AI agent, you effectively solve the context problem for the agent.
\\nHowever, integrating Goodreads review scores would require extra effort. It would require calling an external API to retrieve review data. This process can quickly become tedious, especially if you need to work with multiple APIs.
\\nNow, imagine if Goodreads offered a tool similar to the bookstore, one that returns book review scores via an MCP server. Goodreads could expose its tool through this server and allow any compatible LLM to discover and access it remotely.
\\nWith this standardization, MCP fosters an interoperable ecosystem by providing a unified protocol for all models, where developers can build AI tools once and make them accessible to a wide range of hosts and services through MCP servers.
\\nAs of this writing, there are already several host applications that support MCP, including Claude Desktop, Claude Code, and IDEs like Cursor and Windsurf. You can find a curated list of host applications on the Awesome GitHub repo and a collection of MCP client and server tools on the PulseMCP website.
\\nMCP follows a client-server architecture similar to the Language Server Protocol (LSP), which helps different programming languages connect with a wide range of dev tools. However, in this case, it helps host applications to connect with a wide range of AI tools.
\\n\\nThe MCP architecture is made up of four core components:
\\nMCP follows a standardized communication process that can be broken down into four well-defined flows when a host application that supports MCP is queried by a user:
\\nAs you may have gathered from the previous section, MCP is essentially an API that uses a two-way connection protocol between the client and the server.
\\nHowever, unlike traditional APIs that rely on various HTTP request methods sent via URLs, the MCP protocol is based on three essential primitives that an MCP server can expose to an LLM:
Tools and resources are the most commonly used primitives because their primary purpose is to enrich the context available to the LLM.
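Under the hood, these primitives are exchanged as JSON-RPC 2.0 messages between the client and the server. As a rough illustration, a client might first discover a server’s tools and then invoke one by name; the tools/list and tools/call methods come from the MCP specification, while the tool name and arguments here are hypothetical:

// Request: list the tools the server exposes
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Request: call one of them by name with arguments
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": { "name": "getBookRatings", "arguments": { "title": "Dune" } }
}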
\\nThe quickest way to get started with MCP, as recommended by Anthropic, is to use the Claude Desktop integration along with one of the pre-built servers for popular systems, such as Google Drive, Slack, Git, and various databases, open sourced by Anthropic. You can also use servers from curated community directories or repositories, like the ones highlighted in previous sections.
\\nThe integration process typically involves configuring the AI application (in this case, Claude Desktop) using a configuration schema like the following:
\\n{\\n \\"mcpServers\\": {\\n \\"brave\\": {\\n \\"command\\": \\"npx\\",\\n \\"args\\": [\\n \\"-y\\",\\n \\"@modelcontextprotocol/server-brave\\",\\n \\"--env\\",\\n \\"BRAVE_API_KEY=API_KEY\\"\\n ]\\n }\\n }\\n }\\n\\n
This is a configuration schema for the Brave MCP server. The schema essentially tells the host how to run the MCP server using the following instructions:
mcpServers: The mcpServers object holds definitions for one or more MCP server configurations, in this case, for Brave
command: Specifies the executable command to run. Here, it uses npx, Node’s package execute tool
args: Contains the command-line arguments that will be passed to the npx command

In summary, when this configuration schema is activated by a host, it will execute the following command:
\\nnpx -y @modelcontextprotocol/server-brave --env BRAVE_API_KEY=API_KEY\\n\\n
This tells the host to install and run the Brave MCP server locally using the STDIO transport layer. To access the Claude Desktop config file, navigate to Settings → Developer:
\\nClicking on the Edit Config button will open Claude’s claude_desktop_config.json
file in your file explorer. You can then edit the file using any text or code editor:
As mentioned earlier, there are community directories that curate lists of MCP servers from various publishers. Platforms like Smithery and mcp.so not only list these servers, but also make integrating them much easier:
\\nFor example, if you try to get the JSON config for the Brave Search MCP server from Smithery, you’ll first need to select your preferred AI agent, such as Claude, Cursor, or Windsurf, and provide an API key for the MCP server if required. Once done, the platform will give you the option to either install the package via an npm
command or via the JSON config:
However, to provide a more in-depth overview of how MCP works and to highlight its use case, we’ll take a deeper dive by building our own MCP server from scratch.
\\nWe’ll build a conceptual storefront application, similar to the example used in previous sections, but this time, it will be for a pizza business. This storefront web app will feature a custom AI agent capable of executing tools to help users make purchase decisions based on available items in the database.
\\nNext, we’ll make this tool publicly accessible by creating an MCP server for it. This way, other AI agents can interact with our application’s context and perform actions like fetching available items from the database.
\\nThis hands-on approach will give you a real-time understanding of how MCP works.
\\nTo keep things simple and straight to the point, I’ve already built the application using Next.js. You can find the codebase in my GitHub repository. Our focus will be on creating a tool for the AI and exposing it via an MCP server. To follow along, make sure you meet the following prerequisites:
\\nAfter cloning the repo, the first step is to navigate to the project’s directory, open the .env
file, and add your DeepSeek API key to the DEEPSEEK_API_KEY
environment variable, along with a system command for the AI agent in the AI_SYSTEM_COMMAND
variable.
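For illustration, the finished .env might look something like this (both values are placeholders, and the system prompt wording is entirely up to you):

DEEPSEEK_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
AI_SYSTEM_COMMAND="You are Pizzaria's friendly AI assistant. Answer menu questions and help customers order."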
Next, open your terminal and run the following commands to install the dependencies and start the development server:
\\nnpm install\\nnpm run dev\\n\\n
Once the development server is running, open your browser and navigate to localhost:3000
. You should see a website similar to the one shown below:
Right now, our store’s AI agent is fully functional and will provide responses when you interact with it. However, if we ask it to give us a list of available items in the store, it will respond with a made-up list:
\\nAs you can see, the agent is just hallucinating. This proves that the AI agent lacks awareness of its environment and requires additional context to function effectively. What we actually want is for the AI agent to have access to our pizza API.
\\nThere’s a lot going on in the project, but for our purposes, we’re only interested in the /chat
API route and the ai-chat.tsx
component.
The /chat
route is responsible for querying the DeepSeek API using the streamText
function from Vercel’s AI SDK to fetch the LLM’s response:
// App/api/chat/route.ts\\n\\nimport { streamText } from \\"ai\\";\\nimport { deepseek } from \\"@ai-sdk/deepseek\\";\\nimport getTools from \\"@/utils/ai-tools\\";\\n\\nconst model = deepseek(\\"deepseek-chat\\");\\n\\nexport async function POST(req: Request) {\\n try {\\n const { messages } = await req.json();\\n const tools = await getTools();\\n\\n const systemPrompt = process.env.AI_SYSTEM_COMMAND;\\n if (!systemPrompt) {\\n throw new Error(\\"AI_SYSTEM_COMMAND environment variable not set.\\");\\n }\\n\\n const result = streamText({\\n model,\\n system: systemPrompt,\\n messages,\\n tools\\n });\\n return result.toDataStreamResponse({\\n sendReasoning: true,\\n });\\n } catch (error) {\\n console.error(`Chat Error: ${error}`);\\n return new Response(JSON.stringify({ error: \\"Internal Server Error\\" }), {\\n status: 500,\\n headers: { \\"Content-Type\\": \\"application/json\\" },\\n });\\n }\\n}\\n\\n
The ai-chat.tsx
component, on the other hand, handles streaming both the user’s input and the AI’s response to the UI using the SDK’s useChat()
hook. Here’s the relevant part of the component:
// components/ai-chat.tsx\\n\\nconst { messages, input, handleSubmit, handleInputChange, status } = useChat({\\n initialMessages: [\\n {\\n id: \\"xxx\\",\\n role: \\"assistant\\",\\n content:\\n \\"🍕 Hey there! \\\\n I’m Pizzaria’s AI helper—here to answer menu questions, check deals, or help you order. Craving something specific? Just ask!\\",\\n },\\n ],\\n });\\n\\n
To provide our application’s context to the AI agent, we need to create a tool that grants it access to our app’s database. This will allow the agent to generate responses based on the context we’ve provided.
\\nWe’ll do this using the tool
function from Vercel’s AI SDK. If you cloned the repo I shared earlier, you don’t need to install anything, as it’s already set up. However, if you’re working with your own app, you can install the SDK and the DeepSeek provider using the following commands:
npm i ai @ai-sdk/deepseek\\n\\n
Note: To follow along with this tutorial, you’ll need to use the Vercel AI SDK for querying DeepSeek on the backend. Although OpenAI’s SDK also offers tool calling, the process differs significantly from Vercel AI’s, so keep this in mind.
\\nAfter installing the packages, navigate to the project directory and create a new folder src
→ utils
→ ai-tools
. Then, add the following code:
// utils/ai-tools.ts

import { pizzas } from "@/data/pizzas";
import { tool } from "ai";
import { z } from "zod";

const pizzaTool = tool({
  description: "Get all pizzas from the database",
  parameters: z.object({
    message: z
      .string()
      .describe("The user message to fetch the pizza list for"),
  }),
  execute: async () => {
    return pizzas;
  },
});

export default async function getTools() {
  return {
    pizzaTool,
  };
}
This code is pretty straightforward. We instantiate a tool instance using the tool method from the AI library. Then, we provide a description
string that explains the tool’s purpose, parameters
to define the expected input for the tool, and an execute
function that returns the pizza
data.
Another important thing to note here is that we’re using Zod to validate the structure of the data being passed to the parameters. This helps prevent the LLM from hallucinating random information.
\\nNext, navigate back to the /chat
route at api
→ chat
and import the tool we just created. Destructure pizzaTool
from the getTools
function and add it to the streamText
object as follows:
// App/api/chat/route.ts

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Pull in the tool we defined in utils/ai-tools.ts
  const { pizzaTool } = await getTools();

  const result = streamText({
    model,
    system: process.env.AI_SYSTEM_COMMAND,
    messages,
    tools: {
      pizzaria: pizzaTool,
    },
  });

  return result.toDataStreamResponse({
    sendReasoning: true,
  });
}
Now, if you go back to the AI agent on the website and ask it a question like “What’s on the menu?”, it should provide a proper response with the correct items from the database:
, it should provide a proper response with the correct items from the database:
Right now, only our AI agent has access to this tool. We can create an identical tool using an MCP server and expose the site’s context to third-party LLMs and IDEs.
\\nTo create an MCP server, we need to set up a new Node server project entirely separate from our main app. Since we’re using STDIO as the transport mechanism, we don’t need to create the server using the conventional process of setting up an HTTP instance.
\\nThe protocol provides an official SDK for building MCP servers in various languages, including TypeScript, Java, C#, and Python. For this tutorial, we’ll use the TypeScript SDK. You can find the package in the official GitHub repository or simply install it using the following command after setting up your Node project:
\\nnpm install @modelcontextprotocol/sdk\\n\\n
Next, add the following code to your main server file (server.ts
, index.ts
, or any other name you’ve chosen):
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({
  name: "pizzaria",
  version: "1.0.0",
});

// Helper for getting pizzas from the storefront's API
async function fetchPizzas() {
  try {
    const res = await fetch("http://localhost:3000/api/pizzas");

    if (!res.ok) {
      const errorText = await res.text();
      throw new Error(`API request failed with status ${res.status}: ${errorText}`);
    }

    return await res.json();
  } catch (error) {
    console.error(`An unexpected error occurred: ${error}`);
  }
}

server.tool("getPizzas", "Get the list of pizzas in the database", async () => {
  const result = await fetchPizzas();

  return { content: [{ type: "text", text: JSON.stringify(result) }] };
});

// Setup and connect MCP server using Standard I/O
async function startMcpServer() {
  const transport = new StdioServerTransport();
  try {
    await server.connect(transport);
    console.error("MCP server connected via stdio and ready for requests.");
  } catch (error) {
    console.error("Failed to connect MCP server:", error);
    process.exit(1);
  }
}

// Start the server
startMcpServer();
In the first part of this code, we use the McpServer
method to initialize an MCP server and then use its instance to register a tool. We’re also using the fetchPizzas()
helper function to retrieve pizza data from our other server and return it from the tool:
const server = new McpServer({
  name: "pizzaria",
  version: "1.0.0",
});

// fetchPizzas helper function is here…

server.tool("getPizzas", "Get the list of pizzas in the database", async () => {
  const result = await fetchPizzas();

  return { content: [{ type: "text", text: JSON.stringify(result) }] };
});
As you can see, the tool method, much like the resource method, typically accepts three parameters: a name, a descriptive parameter (which, in the case of a resource, is usually a URI), and a callback function used to fetch the data.
\\nIn this example, we’ve set the name to getPizzas
, passed an instruction as the parameter to retrieve the list of pizzas from the database, and used the execution function to handle the data fetching.
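For comparison, here is roughly what exposing the same data as a resource could look like with the TypeScript SDK. This is only a sketch: it reuses the server and fetchPizzas definitions from the snippet above, and the menu://pizzas URI scheme is made up for illustration:

// Resources are addressed by a URI and read by the client rather than executed
server.resource("pizzas", "menu://pizzas", async (uri) => {
  const pizzas = await fetchPizzas();

  return {
    contents: [{ uri: uri.href, text: JSON.stringify(pizzas) }],
  };
});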
In the second half, we instantiate the STDIO transport layer and bind it to our MCP server instance. Finally, we start the server to begin listening for incoming requests:
\\nasync function startMcpServer() {\\n const transport = new StdioServerTransport();\\n\\n try {\\n await server.connect(transport);\\n console.error(\\"MCP server connected via stdio and ready for requests.\\");\\n } catch (error) {\\n console.error(\\"Failed to connect MCP server:\\", error);\\n process.exit(1);\\n }\\n}\\n\\nstartMcpServer();\\n\\n
You can start the MCP server using the good ol’ node server.ts
command, and the server should spin up without issues. Currently, there’s no visible indicator confirming that the server is working correctly. While the MCP team does provide a tool for testing MCP servers, we’ll skip that for now and connect it directly to Claude Desktop instead.
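If you do want a quick sanity check before wiring the server into a host, the MCP Inspector can launch it and give you a browser UI for listing and invoking its tools. Something along these lines should work, assuming the server has been compiled to plain JavaScript first:

npx @modelcontextprotocol/inspector node server.js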
Now that our server resource is exposed, we can connect it to any supported host, such as Claude, ChatGPT, or AI-powered IDEs. For this tutorial, we’ll stick with Claude since we’re already familiar with its setup.
\\nTo do this, all we have to do is go to Claude’s claude_desktop_config.json
config file and add a connection schema for our MCP server:
{\\n \\"mcpServers\\": {\\n \\"pizzaria\\": {\\n \\"command\\": \\"node\\",\\n \\"args\\": [\\"C:/Users/dave/OneDrive/Desktop/MCP_server/server.ts\\"]\\n }\\n }\\n}\\n\\n
Since we’re serving locally, all we have to provide is the path to our MCP server and the node
command to run it.
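One caveat: running a TypeScript file directly with node only works on newer Node.js versions. If node server.ts errors out on your machine, a simple workaround is to compile the file first and point the config at the JavaScript output instead:

npx tsc server.ts --outDir dist
node dist/server.js

Remember to update the path in the args array of the config accordingly.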
Now, if you restart Claude, you should see a tool and MCP attachment icon, indicating that our MCP server has been successfully installed and connected to Claude Desktop’s MCP client:
\\nIf we ask Claude Desktop pizza-related questions, it will call the getPizzas
tool and use the returned results to form a response:
This use case might seem like overkill, considering our MCP server only needs to access the database. However, the goal here is to demonstrate how to build an MCP server for a small service entirely from scratch.
\\nLike any emerging technology, MCP brings its own set of complexities and challenges that developers and organizations should consider before adopting it at scale:
\\nAI agents often struggle with tool selection and execution. MCP addresses this by allowing structured tool descriptions and specifications, enabling agents to better interpret and use them. However, the effectiveness of this approach still heavily depends on the clarity and quality of these descriptions, as well as the agent’s ability to interpret them correctly.
\\nBest practice: Write clear, concise, and comprehensive tool descriptions. Explain not just what the tool does but when to use it, including parameter-by-parameter documentation to guide the AI effectively.
\\nTools with broad functionalities can create usability and maintenance issues. AI agents may struggle to choose or execute these tools correctly due to overlapping functionalities, and such tools often require frequent updates.
\\nBest practice: Design tools with specific, well-defined purposes. Break complex logic into smaller, single-responsibility tools, minimize the number of parameters, and define data types wherever applicable.
\\nMCP is still a relatively new protocol and is evolving rapidly. This means the ecosystem is subject to frequent (and potentially breaking) changes. While the core concepts behind MCP are stable, version updates for servers and clients may introduce overhead for maintenance.
\\nBest practice: Anticipate breaking changes. Stay updated with the latest specifications and release notes. Consider version locking and semantic versioning in production environments to minimize disruption.
\\nCurrently, MCP only has first-class support within the Anthropic ecosystem (i.e., Claude). While OpenAI has extended its agent SDK to support MCP, widespread adoption is still uncertain. Many other AI platforms do not yet natively support MCP, which may require additional workarounds, such as custom adapters or integrations.
\\nBest practice: Evaluate the MCP compatibility of your chosen AI providers. If you rely on multiple AI systems, consider designing abstraction layers or fallback mechanisms to maintain flexibility.
\\nMCP is expected to revolutionize the AI landscape, much like how mobile applications spurred the boom of smartphone devices. With MCPs, developers can build truly connected ecosystems of AI models and intelligent user experiences.
\\nAs a challenge, create an addToCart
tool that can query the /cart
route and add pizzas to the cart upon request. Happy coding!
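If you want a starting point for the challenge, a minimal sketch might look like the following. The /api/cart endpoint and the pizzaId and quantity parameters are assumptions; adapt them to however the cart route is actually implemented:

// Sketch only: extends the server.ts example above
// (assumes `server` from the earlier snippet and: import { z } from "zod";)
server.tool(
  "addToCart",
  "Add a pizza to the user's cart",
  // Hypothetical input shape; match it to what the /cart route expects
  { pizzaId: z.string(), quantity: z.number().int().min(1).default(1) },
  async ({ pizzaId, quantity }) => {
    const res = await fetch("http://localhost:3000/api/cart", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ pizzaId, quantity }),
    });

    return { content: [{ type: "text", text: JSON.stringify(await res.json()) }] };
  }
);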
The React compiler is one of the biggest updates to the React framework in years. Last year, the React team released a beta version. Around that time, I wrote an article called “Exploring React’s Compiler: A Detailed Introduction”. It unpacked the core idea behind the compiler and showed how it could improve React development by automatically handling performance issues behind the scenes.
\\nWhen the beta dropped, the React team encouraged developers to try it out, give feedback, and contribute. And many did. A standout example is Sanity Studio. They used the compiler internally and shipped libraries like react-rx
and @sanity/ui
that are optimized for it.
Thanks to contributions like these, the React team officially released the first Release Candidate (RC) on April 21. According to the release blog, RC is meant to be stable and production-ready.
\\nIn this article, we’ll go over what’s new in RC and what it means for you as a React developer.
\\nWe’ll get into the nitty gritty below, but here’s a quick summary of what the RC promises:
\\nNow let’s take a step back and dive a little deeper into the thought process behind the React Compiler.
\\nTo recap, the React Compiler is a build-time tool that automatically optimizes your React apps through memoization. Simply put, the compiler analyzes your React code during the build process and strategically inserts memoization wherever it sees potential performance gains. This eliminates the tedious and often confusing process of manual optimization using tools like React.memo()
, useMemo()
, and useCallback()
.
The release of RC is a big milestone. As I mentioned in the previous article, the compiler has been in development for a while. It was originally called “React Forget,” but it’s come a long way since then.
\\nThe beta version gave us an early look and drew a lot of useful feedback. Case studies from teams like Sanity Studio and Wakelet helped shape the direction. Now with the Release Candidate, the compiler is moving toward full stability. The React team believes it’s ready for real-world use.
\\nThe typical progression from beta to release candidate is solidifying the existing functionality, improving stability, and making the developer experience smoother in preparation for the final stable release. RC does all that, but it also includes a few new updates. Here’s what stands out:
\\nIn the beta release, the compiler supported several build tools like Vite, Babel, and Rsbuild, but not SWC, a Rust-based tool used in frameworks like Next.js and Parcel. With RC update, the compiler now supports SWC as an SWC plugin.
\\nThis integration is still a work in progress, but it’s promising. It brings better Next.js build performance by using a custom optimization in SWC that only applies the compiler to relevant files, i.e., files containing JSX or React Hooks, instead of compiling everything. This helps keep build times fast and minimize performance costs.
\\nThe process for enabling the compiler in Next.js remains the same as outlined in the previous article, so you can refer to it if you’d like to use it in your project. However, since SWC support is still experimental, things may not work as expected. The React team recommends using Next.js v15.3.1 and above for optimal build performance.
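For reference, the flag itself hasn’t changed: in a recent Next.js project, enabling the compiler is still a one-line experimental option (this assumes babel-plugin-react-compiler is installed):

// next.config.js
module.exports = {
  experimental: {
    reactCompiler: true,
  },
};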
\\nAnother notable update is the migration of eslint-plugin-react-compiler
into eslint-plugin-react-hooks
. As mentioned before, the compiler relies on strict adherence to React’s rules. That’s what the dedicated compiler plugin helped enforce.
With RC, you no longer need that separate package. It’s now part of the main ESLint plugin for React. If you’ve been using eslint-plugin-react-compiler
, you can uninstall it and switch to eslint-plugin-react-hooks
:
npm install --save-dev eslint-plugin-react-hooks
Then enable the compiler rule in your ESLint config file (flat or legacy):
// eslint.config.js
import * as reactHooks from 'eslint-plugin-react-hooks';

export default [
  // Flat config (ESLint 9+)
  reactHooks.configs.recommended,

  // Legacy config: use this preset instead of the one above
  // reactHooks.configs['recommended-latest'],

  {
    rules: {
      'react-hooks/react-compiler': 'error',
    },
  },
];
Unlike the dedicated ESLint plugin for the compiler, the new rule in eslint-plugin-react-hooks
doesn’t require the compiler to be installed. So there’s no risk in upgrading to it, even if you haven’t adopted the compiler yet.
N.B. To enable the rule, add \'react-hooks/react-compiler\': \'error\'
to your ESLint config, as shown in the example above.
In the beta version, the compiler was only compatible with React 19, and the fastest way to use it was within a Next.js project. However, with the RC release, the compiler is now backward compatible with older React versions.
\\nIt still works best with React 19, but if you’re not ready to upgrade, you can install a separate react-compiler-runtime
package to run compiled code on older versions (just not earlier than React 17).
For Vite users, the setup is mostly the same, using the babel-plugin-react-compiler
package:
npm install --save-dev --save-exact babel-plugin-react-compiler@rc\\n\\n
And add it as a Babel plugin:
// babel.config.js
const ReactCompilerConfig = { /* ... */ };

module.exports = function () {
  return {
    plugins: [
      ['babel-plugin-react-compiler', ReactCompilerConfig], // must run first!
      // ...
    ],
  };
};

// vite.config.js
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

const ReactCompilerConfig = { /* ... */ };

export default defineConfig(() => {
  return {
    plugins: [
      react({
        babel: {
          plugins: [
            ['babel-plugin-react-compiler', ReactCompilerConfig],
          ],
        },
      }),
    ],
    // ...
  };
});
But if you’re using an older React version, like 18, the process includes one extra step, as mentioned earlier. You’ll need to install the react-compiler-runtime
package:
npm install react-compiler-runtime@rc\\n\\n
After that, update your babel.config.js
or vite.config.js
file to explicitly target the React version you’re using:
const ReactCompilerConfig = {\\n target: \'18\' // \'17\' | \'18\' | \'19\'\\n };\\n\\n module.exports = function () {\\n return {\\n plugins: [\\n [\'babel-plugin-react-compiler\', ReactCompilerConfig],\\n ],\\n };\\n };\\n\\n
The React team is also collaborating with the oxc team to eventually add native support for the compiler. Once Rolldown, a Rust-based bundler for JavaScript and TypeScript, is released and supported in Vite, developers should be able to integrate the compiler without relying on Babel.
\\nThe compiler’s ability to automatically track dependencies is already a game-changer, as it reduces the need for manual specifications in optimization Hooks like useEffect
, useMemo
, and useCallback
. RC builds on this by improving how it handles more complex JavaScript patterns, like optional chaining and array indices.
Improved support for optional chaining (?.)
Previously, when dealing with nested objects and potential null
or undefined
values, you might have written code like this with manual dependency arrays:
import React, { useState, useEffect } from \'react\';\\n\\nfunction UserProfile({ user }) {\\n const [displayName, setDisplayName] = useState(\'\');\\n\\n useEffect(() => {\\n if (user && user.profile && user.profile.name) {\\n setDisplayName(user.profile.name);\\n } else {\\n setDisplayName(\'Guest\');\\n }\\n }, [user && user.profile && user.profile.name]);\\n // ...\\n}\\n\\n
With the RC, the compiler can now intelligently track the dependency on the nested property even with the optional chaining:
\\nimport React, { useState, useEffect } from \'react\';\\n\\nfunction UserProfile({ user }) {\\n const [displayName, setDisplayName] = useState(\'\');\\n\\n useEffect(() => {\\n setDisplayName(user?.profile?.name || \'Guest\');\\n }, [user?.profile?.name]); // Compiler can understand this dependency now\\n // ...\\n}\\n\\n
The compiler now understands expressions like user?.profile?.name
. It knows to re-run the effect if user.profile.name
changes, even if user
or user.profile
starts as null
or undefined
and then gets a value later. There’s no need to manually track each piece.
Similarly, when an effect or memoized value depends on a specific element within an array, you previously had to explicitly include that element in the dependency array:
\\nimport React, { useState, useEffect } from \'react\';\\n\\nfunction ItemList({ items }) {\\n const [firstItemName, setFirstItemName] = useState(\'\');\\n\\n useEffect(() => {\\n if (items && items[0]) {\\n setFirstItemName(items[0].name);\\n } else {\\n setFirstItemName(\'\');\\n }\\n }, [items && items[0] && items[0].name]); // Manual dependency on the first item\'s name\\n // ...\\n}\\n\\n
Now, the compiler can understand the dependency on a specific array index:
\\nimport React, { useState, useEffect } from \'react\';\\n\\nfunction ItemList({ items }) {\\n const [firstItemName, setFirstItemName] = useState(\'\');\\n\\n useEffect(() => {\\n setFirstItemName(items?.[0]?.name || \'\');\\n }, [items?.[0]?.name]); // Compiler understands the dependency on the first item\'s name\\n // ...\\n}\\n\\n
Here, the compiler will correctly identify that the useEffect
depends on items[0].name
; it knows to re-run the effect if either items
changes or the name
of the first item changes. This takes a lot of the guesswork and boilerplate out of writing stable, performant Hooks.
As a frontend developer, you might be wondering how this update impacts your day-to-day work, both the good and the bad.
The first thing you need to know is that the RC isn’t just a performance optimization under the hood; it has a substantial impact on how frontend developers write, maintain, and even think about their React applications.
\\nOne of the immediate impacts of this update is the potential for significantly reduced boilerplate code. With improved dependency inference, the compiler’s ability to track dependencies means developers can write more concise and readable code.
\\n\\nTake this typical pre-compiler scenario:
\\nimport React, { useState, useEffect, useMemo } from \'react\';\\n\\nfunction ProductDetails({ product }) {\\n const discountedPrice = useMemo(() => {\\n return product && product.price * (1 - (product.discount || 0));\\n }, [product && product.price, product && product.discount]);\\n\\n const [formattedPrice, setFormattedPrice] = useState(\'\');\\n useEffect(() => {\\n if (discountedPrice !== undefined) {\\n setFormattedPrice(`$${discountedPrice.toFixed(2)}`);\\n }\\n }, [discountedPrice]);\\n\\n return (\\n <div>\\n {product && <h1>{product.name}</h1>}\\n <p>Price: {product && `$${product.price}`}</p>\\n {discountedPrice !== undefined && <p>Discounted Price: {formattedPrice}</p>}\\n </div>\\n );\\n}\\n\\n
If product
and its properties rarely change, the compiler can now automatically memoize the component and optimize effects based on how the data actually flows. That means you can safely skip the manual use of useMemo
or detailed dependency arrays, if the compiler determines it’s safe to do so:
import React, { useState, useEffect } from \'react\';\\n\\nfunction ProductDetails({ product }) {\\n // Compiler can likely optimize this component based on data flow\\n const discountedPrice = product?.price * (1 - (product?.discount || 0));\\n const [formattedPrice, setFormattedPrice] = useState(\'\');\\n\\n useEffect(() => {\\n setFormattedPrice(`$${discountedPrice?.toFixed(2)}`);\\n }, [discountedPrice]); // Compiler understands dependency on discountedPrice\\n\\n return (\\n <div>\\n <h1>{product?.name}</h1>\\n <p>Price: ${product?.price}</p>\\n {discountedPrice !== undefined && <p>Discounted Price: {formattedPrice}</p>}\\n </div>\\n );\\n}\\n\\n
The result? Cleaner components that are easier to read and maintain.
\\nThe compiler as a whole will eventually change how we think about performance. Instead of proactively identifying bottlenecks and manually memoizing components and values, you can just focus on writing clear, functional components that follow React’s best practices.
\\nThat doesn’t mean you can forget about performance entirely. But it does give you a more “performance-by-default” foundation, something other frameworks often boast of. As long as you follow the rules of React, the compiler takes care of the rest.
At first glance, the compiler seems like it should “just work,” and in most cases, it does. But someone on X (formerly Twitter) raised a fair point: how do you know what is actually being memoized?
\\nThat caught my attention. The whole point of the compiler is that you shouldn’t have to worry about that. But this user went on to describe edge cases where automatic memoization might not apply. So, how do you tell when it’s working and when it’s not?
Although I covered how to check this in a previous article, it could fairly be called a learning curve. Not a steep one, but enough that every developer will need to build some understanding of how the compiler works, what its limitations are, and how to reason about it when debugging performance issues.
\\nOne potential drawback of the compiler is the slow adoption rate among popular libraries. A good example is React Hook Form, which can run into issues when the compiler is enabled. This was pointed out by a Reddit user who’s been using the compiler in production on some projects.
\\nIt’s not too surprising, since many libraries occasionally bend React’s rules, so some won’t work with the compiler right away. And since the compiler isn’t fully stable yet, most libraries haven’t had time to officially adopt it.
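If you do hit one of these incompatibilities, the compiler ships an escape hatch: a "use no memo" directive at the top of a component or Hook opts it out of compilation. A minimal sketch (the component name is hypothetical):

function FormWithIncompatibleLibrary(props) {
  "use no memo"; // opt this component out of React Compiler optimization
  // ...code that relies on patterns the compiler can't handle...
  return <form>{/* ... */}</form>;
}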
For developers considering the compiler at this time, this is an important factor to keep in mind.
\\nThe release of the RC update is a big step forward, not just for the compiler itself, but for how we build and optimize React apps. With smarter dependency inference, simpler ESLint integration, and early support for tools like SWC, it’s clear the React team is serious about making performance a built-in feature, not an afterthought.
\\nFor developers, this means less time fine-tuning and more time focusing on building great user experiences.
\\nThat said, the compiler is still evolving. If you get a chance to try it in your projects, your feedback will go a long way in shaping where it goes next.
Access control is crucial for securing web applications and ensuring a smooth user experience. It answers the question: What are you, as the user, allowed to do?
On the frontend, access control isn’t really about security, since client-side checks can always be bypassed. Instead, frontend access control serves a different purpose: improving user experience. It helps guide users by only displaying UI elements they can actually interact with.
\\nThis raises an important question: Which access control model is best for frontend applications? For most web applications, RBAC is the go-to. It integrates well with frameworks like React and Next.js, allowing you to have a smooth performance while scaling easily with growing teams. But what about other models like ABAC, ACL, or PBAC? Can they also work for frontend applications?
\\nThis article explores how each model works, their suitability for frontend access control, and why RBAC might be the best choice for most cases.
\\nRBAC, or role-based access control, is a model that restricts access to resources and data based on a user’s role. On the frontend, RBAC can be used to control what parts of the UI a user can view or interact with. Instead of assigning permissions to users, permissions are put into roles so that access management becomes easier as the application scales:
RBAC operates based on three principles:

– Role assignment – a user can exercise a permission only if they have been assigned a role
– Role authorization – a user’s active role must be authorized for that user
– Permission authorization – a user can exercise a permission only if it is authorized for their active role

RBAC lets your UI adapt dynamically based on the user’s role, and it does this by hiding or disabling elements that users don’t have access to. For example, a regular user might see only their profile section, a manager additionally sees team reports and approval requests, and an admin also sees system management.

You can represent this logic in code like so:
\\nimport React, { useState } from \\"react\\";\\n\\nconst roles = {\\n user: [\\"viewProfile\\"],\\n manager: [\\"viewProfile\\", \\"viewReports\\", \\"approveRequests\\"],\\n admin: [\\"viewProfile\\", \\"viewReports\\", \\"approveRequests\\", \\"manageSystem\\"],\\n};\\n\\nconst hasPermission = (role, permission) => roles[role]?.includes(permission);\\n\\nconst Dashboard = ({ role }) => (\\n <div>\\n <h1>Dashboard</h1>\\n {hasPermission(role, \\"viewProfile\\") && <p> Profile Section</p>}\\n {hasPermission(role, \\"viewReports\\") && <p> Team Reports</p>}\\n {hasPermission(role, \\"approveRequests\\") && <p>Approval Requests</p>}\\n {hasPermission(role, \\"manageSystem\\") && <p>System Management</p>}\\n </div>\\n);\\n\\nconst App = () => {\\n const [role, setRole] = useState(\\"user\\");\\n\\n return (\\n <div>\\n <label>Select Role:</label>\\n <select value={role} onChange={(e) => setRole(e.target.value)}>\\n <option value=\\"user\\">User</option>\\n <option value=\\"manager\\">Manager</option>\\n <option value=\\"admin\\">Admin</option>\\n </select>\\n <Dashboard role={role} />\\n </div>\\n );\\n};\\n\\nexport default App;\\n\\n
Each role dictates what content appears in the UI so you get a more customized experience.
RBAC is a popular choice for frontend applications for reasons like the following:

– Simplicity – roles are easy to define, reason about, and map to UI elements
– Performance – role-to-permission mappings are static, so role-based UI can be computed once and cached
– Scalability – as teams grow, new users are simply assigned existing roles
– Framework fit – it integrates cleanly with frameworks like React and Next.js

Despite all its pros, RBAC has some limitations, especially when applications require more granular control:

– It isn’t context-aware; a role alone can’t express conditions like department, resource ownership, or time of day
– Modeling many fine-grained permissions can lead to role explosion, with ever more narrowly defined roles
Attribute-based access control, or ABAC, is an access model that makes decisions based on several attributes, including:

– User attributes – e.g., department, role, seniority
– Resource attributes – e.g., a document’s classification or owner
– Environmental attributes – e.g., time of day, location, or device
\\nFor a simple explanation, it’s like saying, “If the user’s department is X, the document is Y-classified, and it’s during business hours, then allow access.”
\\nSo, because ABAC permissions are dynamic, can they be used on the frontend?
\\nThe short answer is yes. You can show or hide components based on evaluated attributes in order to shape the experience to fit what a user should see or interact with. However, ABAC decisions are dynamic, so UI elements can’t be cached easily, unlike RBAC. For ABAC, every interaction may require recalculating access rules, making the UI less performant, especially in large-scale applications. It can also lead to increased state management complexities.
\\nABAC is best used when roles aren’t expressive enough and you need more granular, real-time control over what the user sees or does.
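To make this concrete, here’s a minimal sketch of an attribute check driving the UI in React. The attribute names (department, classification, business hours) are hypothetical and simply mirror the example above:

// Hypothetical ABAC check combining user, resource, and environment attributes
const canViewDocument = (user, doc, now = new Date()) =>
  user.department === doc.department && // user attribute vs. resource attribute
  doc.classification !== "restricted" && // resource attribute
  now.getHours() >= 9 && now.getHours() < 17; // environmental attribute (business hours)

// The attributes are re-evaluated on every render, which is exactly why
// ABAC-driven UIs are harder to cache than role-based ones
const DocumentLink = ({ user, doc }) =>
  canViewDocument(user, doc) ? <a href={doc.url}>{doc.title}</a> : null;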
\\nAccess control list, or ACL, is an access control model that defines permissions at an individual resource level. This means that access is assigned directly to specific users or groups rather than assigning permissions based on roles (RBAC) or attributes (ABAC).
Each resource, such as a file, database record, or API endpoint, has a list of explicit allow or deny rules that determine:

– Which users or groups can access that specific resource
– What operations (read, write, delete, and so on) each of them may perform
\\nACLs work well in backends, where fine-grained control is needed. But in frontend applications, they’re often a poor fit. The main issue is complexity; the UI has to scan large permission lists in order to decide whether to show or hide elements like buttons or menu items. This doesn’t scale well and quickly becomes hard to manage.
\\nSome teams make it work by loading ACL data after login, storing it in memory (like Redux), and using helper functions or hooks to check permissions. This approach is fine for smaller projects or when you already rely on ACLs on the backend. But it’s more of a workaround than a clean solution; you’ll spend more time wrestling with ACL than actually building product.
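As a rough sketch of that workaround, assume the backend returns a permission map after login (the resource keys and actions below are hypothetical); a tiny hook can then answer permission questions for the UI:

// Hypothetical ACL shape returned by the backend after login:
// { "document:42": ["read", "edit"], "report:7": ["read"] }
const useACL = (acl) => ({
  can: (resource, action) => acl[resource]?.includes(action) ?? false,
});

// Usage: only render the edit button if the ACL explicitly allows it
const DocumentToolbar = ({ acl, docId }) => {
  const { can } = useACL(acl);
  return can(`document:${docId}`, "edit") ? <button>Edit</button> : null;
};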
\\nRBAC or ABAC is usually better for frontend-level access control because they’re simpler, easier to maintain, and scale more gracefully.
\\nOften regarded as the superior access control approach, Policy-Based Access Control (PBAC) is a model that grants or denies access based on pre-defined policies rather than roles (RBAC), attributes (ABAC), or individual permissions (ACL).
\\n\\nPBAC uses a set of rules and conditions to enforce access decisions dynamically. PBAC is commonly implemented using policy engines such as OPAs (Open Policy Agents), which evaluate logical conditions to determine access control.
In PBAC, policies are centrally defined and enforced based on a combination of:

– Who is making the request (identity, roles, attributes)
– What resource is being accessed
– Under which conditions (time, location, request context)
\\nNow, can PBAC be used on the frontend? Technically, yes, but that doesn’t mean it’s a good idea.
\\nPBAC is built for real-time security checks, so it should be considered for use only in the backend. When you try to do verifications on the frontend, you’ll run into issues like having to verify live conditions before you can render pages or buttons. This will not only take time, but it can also cause your UI to flicker and make it unresponsive.
\\nSo, while PBAC is powerful, it’s a bit too heavy for frontend use. It shines in backend services, especially in microservice architectures where you want centralized, consistent policy enforcement across APIs.
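For context, a backend PBAC check against a policy engine usually boils down to a single query. Here’s a minimal sketch against OPA’s HTTP data API; the URL and policy path are assumptions for illustration:

// Ask OPA whether this request is allowed; the decision logic lives
// in a centrally managed policy, not in application code
async function isAllowed(user, action, resource) {
  const res = await fetch("http://localhost:8181/v1/data/app/authz/allow", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: { user, action, resource } }),
  });
  const { result } = await res.json();
  return result === true;
}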
\\nWhile all four models have their uses, RBAC is still the most practical choice for frontend apps. Here’s how it compares to the others:
\\nModel | \\nStrengths | \\nWeaknesses | \\nBest for | \\n
---|---|---|---|
RBAC (Role-Based) | \\n\\n
| \\n\\n
| \\nSPAs, dashboards, tiered SaaS apps, ecommerce | \\n
ABAC (Attribute-Based) | \\n\\n
| \\n\\n
| \\nApps where roles aren’t enough, e.g., department-based restrictions | \\n
ACL (Access Control List) | \\n\\n
| \\n\\n
| \\nRare edge cases where you need manual overrides | \\n
PBAC (Policy-Based) | \\n\\n
| \\n\\n
| \\nBackend APIs, microservices, enterprise policy enforcement | \\n
RBAC might not be the flashiest model, but for frontend applications, it hits the right balance between simplicity, performance, and control. If your app doesn’t need highly dynamic, context-aware permissions, there’s usually no reason to reach for ABAC or ACL.
Having covered why RBAC is an ideal choice for frontend applications, let’s demonstrate a simple implementation with Next.js. In this example, we’ll create:

– A roles config that maps each role to its permissions
– A custom useRBAC hook for permission checks
– A dashboard that renders sections based on the selected role
– A protected admin-only page, plus a forbidden page for denied access
\\nOpen your terminal and navigate to the folder where you want to build this project.
\\nThen, run the following commands:
\\nnpx create-next-app@latest rbac-demo\\ncd rbac-demo\\nnpm install\\n\\n
On installation, you’ll see the following prompts:
\\n✔ Would you like to use TypeScript? … No / Yes\\n✔ Would you like to use ESLint? … No / Yes\\n✔ Would you like to use Tailwind CSS? … No / Yes\\n✔ Would you like your code inside a `src/` directory? … No / Yes\\n✔ Would you like to use App Router? (recommended) … No / Yes\\n✔ Would you like to use Turbopack for `next dev`? … No / Yes\\n✔ Would you like to customize the import alias (`@/*` by default)? … No / Yes\\n\\n
Afterward, Next.js and its required dependencies will be installed.
\\nIn the src
folder, create a config
folder. Inside it, add a file called roles.js
:
export const roles = {\\n user: [\\"viewProfile\\"],\\n manager: [\\"viewProfile\\", \\"viewReports\\", \\"approveRequests\\"],\\n admin: [\\"viewProfile\\", \\"viewReports\\", \\"approveRequests\\", \\"manageSystem\\"],\\n};\\nexport const hasPermission = (role, permission) => {\\n return roles[role]?.includes(permission);\\n};\\n\\n
This maps each role to its allowed permissions and includes a helper function to check if a role has a given permission.
\\n\\nNow, you need a custom hook to manage roles and permissions. Create a hooks
folder in the src
folder, and inside it, add useRBAC.js
:
import { roles } from \\"../config/roles\\";\\nconst useRBAC = (role = \\"user\\") => {\\n const hasPermission = (permission) => {\\n return roles[role]?.includes(permission);\\n };\\n return { hasPermission };\\n};\\nexport default useRBAC;\\n\\n
This hook takes the current role and exposes a permission check for it.
\\nCreate a components
folder in the src
folder. Inside it, create a file called Dashboard.js
. Add the following code:
import useRBAC from \\"../hooks/useRBAC\\";\\nconst Card = ({ title, emoji }) => (\\n <div\\n style={{\\n border: \\"1px solid white\\",\\n borderRadius: \\"8px\\",\\n padding: \\"20px\\",\\n marginBottom: \\"15px\\",\\n backgroundColor: \\"#000\\",\\n color: \\"white\\",\\n }}\\n >\\n <h3>\\n {emoji} {title}\\n </h3>\\n <p>This section is visible because your role has permission.</p>\\n </div>\\n);\\nconst Dashboard = ({ userRole }) => {\\n const { hasPermission } = useRBAC(userRole);\\n return (\\n <div>\\n <h2 style={{ color: \\"white\\" }}>Dashboard</h2>\\n {hasPermission(\\"viewProfile\\") && <Card title=\\"Profile Section\\" />}\\n {hasPermission(\\"viewReports\\") && <Card title=\\"Team Reports\\" />}\\n {hasPermission(\\"approveRequests\\") && <Card title=\\"Approval Requests\\" />}\\n {hasPermission(\\"manageSystem\\") && <Card title=\\"System Management\\" />}\\n </div>\\n );\\n};\\nexport default Dashboard;\\n\\n
The dashboard shows different sections based on permissions for the current role.
\\nEdit src/app/page.js
like this:
\\"use client\\";\\nimport { useState } from \\"react\\";\\nimport Dashboard from \\"../components/Dashboard\\";\\nexport default function Home() {\\n const [role, setRole] = useState(\\"user\\");\\n return (\\n <div style={{ textAlign: \\"center\\", marginTop: \\"50px\\" }}>\\n <h1>RBAC in Next.js</h1>\\n <label>Select Role:</label>\\n <select value={role} onChange={(e) => setRole(e.target.value)}>\\n <option value=\\"user\\">User</option>\\n <option value=\\"manager\\">Manager</option>\\n <option value=\\"admin\\">Admin</option>\\n </select>\\n <Dashboard userRole={role} />\\n </div>\\n );\\n}\\n\\n
This lets you switch roles dynamically and updates the dashboard accordingly.
\\nYou need a protected admin-only page to ensure that only users with the right permissions can access a particular page. In the src/app
folder, create a folder called admin
with a page.js
file inside it. Add the following code:
\\"use client\\";\\nimport { useSearchParams, useRouter } from \\"next/navigation\\";\\nimport { useEffect } from \\"react\\";\\nimport useRBAC from \\"../../hooks/useRBAC\\";\\nconst AdminPage = () => {\\n const searchParams = useSearchParams();\\n const role = searchParams.get(\\"role\\") || \\"user\\";\\n const { hasPermission } = useRBAC(role);\\n const router = useRouter();\\n useEffect(() => {\\n if (!hasPermission(\\"manageSystem\\")) {\\n router.push(\\"/forbidden\\");\\n }\\n }, [hasPermission, router]);\\n return (\\n <div\\n style={{ display: \\"flex\\", justifyContent: \\"center\\", marginTop: \\"50px\\" }}\\n >\\n <div\\n style={{\\n padding: \\"30px\\",\\n border: \\"1px solid #ccc\\",\\n borderRadius: \\"8px\\",\\n minWidth: \\"300px\\",\\n }}\\n >\\n <h1>Admin Panel</h1>\\n <p>Welcome to the admin panel. Only admins can see this.</p>\\n </div>\\n </div>\\n );\\n};\\nexport default AdminPage;\\n\\n
If the current role lacks the \\"manageSystem\\"
permission, the user is redirected to a 403 page.
In the src/app
folder, create a folder called forbidden
with a page.js
file inside it. Add the following code:
export default function ForbiddenPage() {\\n return (\\n <div style={{ textAlign: \\"center\\", marginTop: \\"50px\\" }}>\\n <h1>Access Denied</h1>\\n <p>You do not have permission to access this page.</p>\\n </div>\\n );\\n}\\n\\n
Then, run and test the changes with the following command:
\\nnpm run dev\\n\\n
Now, visit http://localhost:3000
and try switching between roles. Navigate to /admin
and test the access control:
This example demonstrates how RBAC works on the frontend. You’ll notice different sections of the dashboard become visible based on the selected role, each one representing a permission.
\\nYou can also test access to a protected route, which in this case is the /admin
, by appending a role to the URL, for example, the URL /admin?role=admin
. If the role lacks permission, you’ll be redirected to a 403 forbidden page.
N.B., this is for demonstration purposes only. Real access control should be enforced on the backend.
\\nIn more complex apps, using a single access control model like RBAC isn’t always enough. Sometimes, mixing models gives you more flexibility, especially when handling edge cases or dynamic UI behavior.
\\nOne common combination is RBAC and ABAC. RBAC handles broad role-based rules, while ABAC fills in the gaps with context-based decisions. For example, you might give all admins access to reports but only let them view reports from their own department. That department check is a dynamic attribute, something RBAC can’t do alone but ABAC can easily handle.
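Here’s a sketch of that hybrid check, reusing the hasPermission helper from earlier; the department attribute is hypothetical:

// RBAC grants the broad capability; ABAC narrows it with an attribute check
const canViewReport = (user, report) =>
  hasPermission(user.role, "viewReports") && // role-based rule
  user.department === report.department; // attribute-based rule

// Usage: a report row only renders for reports from the viewer's department
const ReportRow = ({ user, report }) =>
  canViewReport(user, report) ? <li>{report.title}</li> : null;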
\\nIn rare cases, you might mix RBAC with ACLs. This usually happens when specific users need custom permissions that don’t cleanly fit into any existing role. It’s not ideal, but for manual overrides or one-off exceptions, ACL-style rules can help, but you have to be disciplined and try not to use them too much so that things don’t get out of hand.
\\nAs for PBAC, let the backend sort that out, as it’s too heavy for the browser to deal with. Also, its policy engines are suited for APIs and services but not for displaying menus and buttons.
\\nAccess control on the frontend is about shaping the user experience. It helps you keep the UI clean, reduce confusion, and make sure users only see what they can actually interact with. While several access control models exist, such as ABAC, ACL, and PBAC, not all of them are built with the frontend in mind.
\\nFor most web applications, including SPAs, dashboards, and SaaS applications, RBAC is the most scalable and practical choice. It’s simple to implement, simple to reason about, and gets along well with frameworks like Next.js.
Of course, no model is perfect. In more complex scenarios, blending RBAC with ABAC can help when roles are not sufficiently expressive. Occasionally, ACLs can be convenient for manual overrides, although they add management overhead. And PBAC, while powerful, should stay on the backend, where performance, security, and centralized policy management are better suited.
The frontend isn’t your fortress; it’s your storefront. Use access control to guide users, not guard the gates.
\\nHeader image source: IconScout
There’s been major controversy (rightly so?) surrounding Next.js’s openness, particularly how it was designed not to work well on serverless platforms other than Vercel. In response, developers created a new solution called OpenNext to make Next.js truly portable across all platforms.
\\nIf you’re new to the drama, the idea of making an open-source library truly open might confuse you. This article will clear it up. We’ll cover how Next.js wasn’t designed to be fully portable on serverless platforms, how OpenNext is fixing the issue, how to get started with OpenNext, and what the future of Next.js portability might look like.
\\nNext.js was created to make it easier to build full-stack applications with React. React, by itself, is just a UI library; it doesn’t give you enough structure for building real-world apps. Next.js filled that gap. It introduced file-based routing, server-side rendering (SSR), static site generation (SSG), and even a way to build backend API routes inside the same project. Over time, it evolved from a frontend framework into a full-stack one.
\\nNow, as Next.js added more backend-like features such as API routes, SSR, middleware, and Incremental Static Regeneration (ISR), it leaned toward serverless infrastructure. Serverless made sense because these backend features don’t require a long-running server. Instead, they can run as on-demand functions that spin up, do their job, and shut down. This aligns perfectly with the kind of small, stateless tasks that serverless handles best. So, the Next.js team started optimizing its build output for this model.
\\nVercel is the company that built and maintains Next.js. Naturally, they optimized the framework to run seamlessly on their own platform. When you deploy a Next.js app to Vercel, every API route becomes its own serverless function. Each SSR page is deployed the same way. ISR and middleware are handled behind the scenes by their infrastructure. The whole thing works beautifully, with no extra configuration needed.
\\nHowever, when you try to deploy that same Next.js app to AWS Lambda or Cloudflare Workers, things start to break or behave unpredictably. This happens because those platforms don’t mirror Vercel’s internal structure, and Next.js wasn’t originally built to be fully portable. So you’re either forced to drop some features, build ugly workarounds, or stick with Vercel.
\\nThis is the exact problem OpenNext was created to solve.
\\nOpenNext is a community-driven project that repackages the Next.js build so it can run on any serverless platform. It provides tooling that maps your app’s pages, API routes, and special features like ISR and middleware into formats compatible with AWS Lambda, Netlify Functions, Cloudflare Workers, and more. It basically simulates Vercel’s runtime behavior, so your app works the same way even outside their ecosystem.
\\nWhen you build a Next.js app, it generates a .next/
folder with everything needed to run the app, including static assets, server-side logic, and routing metadata. OpenNext takes that build output and repackages it for other serverless platforms. This is done by splitting your app into platform-specific parts, like Lambda functions for SSR/API, static files for S3 or a CDN, and background jobs for ISR. It wraps the code in the correct runtime handlers for each target and re-implements Vercel-only features like ISR and image optimization using services like S3, DynamoDB, and Sharp.
The Vercel team has started collaborating with OpenNext contributors. They’re now working on standardizing deployment adapters, which should eventually make it easier to deploy Next.js apps anywhere.
\\nThat said, hosting isn’t the only thing people are worried about. There’s been growing drama around the direction of React, Next.js, and Vercel as a whole. Vercel has hired most of the React core team, and a lot of people are uncomfortable with how much control one company now has over the ecosystem. Some are worried that React could slowly shift to favor Next.js-specific ideas, leaving other frameworks out in the cold.
\\nAnother problem is how fast Next.js changes overall. Next.js keeps dropping new features, breaking changes, and API shifts without slowing down. It’s getting harder for teams to build long-term projects without worrying that their entire approach could become outdated overnight.
\\nMaybe people are overreacting. Or maybe it’s a real sign that the ecosystem needs better checks and balances. Either way, OpenNext shows that the community still has a say, and that’s a good thing.
\\nNow we’ll transition into how to use OpenNext with a variety of tools. OpenNext currently provides adapters for Cloudflare, AWS Lambda, and Netlify.
\\nYou can create a new Next.js app pre-configured for Cloudflare Workers using OpenNext by running:
\\nnpm create cloudflare@latest -- my-next-app --framework=next --platform=workers\\n
This command will set up a new Next.js app with all the necessary configs and libraries to make your app work seamlessly on Cloudflare. After the installation, you can also run npm run preview
to locally preview how your app will behave in the Cloudflare Workers runtime, rather than in Node.js.
Once you’re done with development, you can deploy the app by running:
\\nnpm run deploy\\n
To configure OpenNext for Cloudflare Workers on an existing Next.js app, first install the Cloudflare adapter:
\\nnpm install @opennextjs/cloudflare@latest\\n
Next, install Wrangler as a dev dependency:
\\nnpm install --save-dev wrangler@latest\\n
Once both libraries are installed, update your package.json scripts to add commands for building, previewing, and deploying your app:
\\n{\\n \\"scripts\\": {\\n \\"preview\\": \\"opennextjs-cloudflare build && opennextjs-cloudflare preview\\",\\n \\"deploy\\": \\"opennextjs-cloudflare build && opennextjs-cloudflare deploy\\"\\n }\\n}\\n
With this update, you can now run npm run preview
to locally see your Next.js app’s behavior on Cloudflare Workers, or npm run deploy
to deploy it live.
SST provides one of the easiest ways to deploy a Next.js app to AWS. All you need to do is initialize SST in your existing Next.js app by running:
\\nnpx sst@latest init\\n
Next, install the newly added dependencies:
\\nnpm install\\n
Finally, deploy your app with:
\\nnpx sst deploy\\n
You can also follow this tutorial for a step-by-step guide on deploying Next.js to AWS Lambda with SST.
OpenNext also provides an adapter that integrates with Netlify. In this case, no extra configuration is needed: Netlify automatically detects your Next.js project and applies the necessary settings, so everything works out of the box.
\\nOpenNext allows you to customize how your Next.js app is built for different platforms using an open-next.config.ts
file. This file should be placed at the same level as your next.config.js
file. Once created, you can use it to modify configurations such as caching behavior, server wrappers, and how ISR is handled for your target platform.
If you’re deploying to AWS with SST, first install the @opennextjs/aws
package:
npm install @opennextjs/aws\\n
Then, create the open-next.config.ts
file in your project root. For example, to enable Lambda streaming in your Next.js deployment:
import type { OpenNextConfig } from \\"@opennextjs/aws/types/open-next.js\\";\\n\\nconst config = {\\n default: {\\n override: {\\n wrapper: \\"aws-lambda-streaming\\",\\n },\\n },\\n} satisfies OpenNextConfig;\\n\\nexport default config;\\n
This configuration enables Lambda streaming, which allows your app to stream responses directly from AWS Lambda, thereby improving performance for dynamic content.
\\nIf Cloudflare is your target deployment platform, you should have already installed the @opennextjs/cloudflare
adapter during your initial setup. If not, install it with:
npm install @opennextjs/cloudflare@latest\\n
Then, create the open-next.config.ts
file in your project root. For example, to enable caching with Cloudflare R2:
import { defineCloudflareConfig } from \\"@opennextjs/cloudflare\\";\\nimport r2IncrementalCache from \\"@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache\\";\\n\\nexport default defineCloudflareConfig({\\n incrementalCache: r2IncrementalCache,\\n});\\n
This configuration sets up your app to use Cloudflare R2 for ISR caching, improving performance by storing and serving cached content.
\\nFor more detailed configurations and advanced use cases, explore the OpenNext documentation.
\\nIn this article, we covered how Next.js wasn’t designed to be fully portable on serverless platforms outside of Vercel, with features like caching, ISR, and more breaking when deployed elsewhere. We explained how OpenNext is fixing this problem, looked at the future of Next.js portability, and discussed how to use OpenNext to deploy Next.js apps that work fully on AWS, Cloudflare, and Netlify.
\\nOpenNext is a big step in the right direction. It proves that Next.js apps don’t have to be locked into Vercel’s hosting anymore.
Pretend you’re constructing a house; you could put up some walls and a roof and say it’s all good. It may stand for a while, but what about when a storm comes? In codebase terms, this is where SOLID principles come into play.
SOLID principles are the architectural blueprint of a codebase. SOLID is an acronym that encapsulates these principles: single responsibility, open-closed, Liskov substitution, interface segregation, and dependency inversion.

They ensure that the code is not only functional but also robust. Think about it. Would you prefer a house with a fragile foundation that could collapse at the slightest breeze? Or a house with a strong foundation that could survive any storm?
The Single Responsibility Principle, or SRP, is the first of the SOLID design principles. I’d say it’s one of the most important principles you must understand to write clean, understandable code.
\\nLet’s dive a little deeper.
\\nIn short, the SRP says that a class should have one reason to change. This definition is easy to comprehend, but let’s explore the key concepts of SRP further.
\\nSRP clarifies “responsibility,” which is the role of a class in the system. The principle states that this responsibility must be well defined.
To comprehend what a responsibility within a system entails, consider two concrete aspects of a system:

– Persisting data to a database
– Sending notification emails to users

These are two separate responsibilities of the system, so you can have a class for each of them.
\\nImagine a single employee of a company is responsible for product management, IT support, and human resource management.
If the company plans to expand its product management capabilities, that means more work for this one employee. If IT support is lacking, the same employee has to work harder. The job quickly becomes overwhelming and unsustainable.
Now imagine a situation where the organization decides to have a focused team for each of the following:

– Product management
– IT support
– Human resource management
\\nOne can expect focus, efficiency, and great results because each team is specialized in specific tasks and goals.
\\nNow that we’ve discussed the theory behind single responsibility (SRP), let’s examine it in practice. We’ll start with an example of an SRP violation in which a class holds multiple responsibilities, and then refactor it to follow SRP correctly.
\\n\\nLet’s take the following UserService
class in TypeScript. It juggles several different roles, including:

– Registering users (a database concern)
– Hashing passwords (an authentication concern)
– Sending welcome emails (a notification concern)
This is a violation of SRP because that class has more than one responsibility. These responsibilities should ideally be separated:
\\nclass UserService {\\n constructor(private database: any) {}\\n // Register a user\\n registerUser(username: string, password: string): void {\\n const hashedPassword = this.hashPassword(password);\\n this.database.save({ username, password: hashedPassword });\\n this.sendWelcomeEmail(username);\\n }\\n // Hashes password\\n private hashPassword(password: string): string {\\n return `hashed_${password}`; // Simulating hashing\\n }\\n// Sends a welcome email\\n private sendWelcomeEmail(username: string): void {\\n console.log(`Sending welcome email to ${username}`);\\n }\\n}\\n\\n
When we finally apply SRP, we should separate concerns into different classes:
\\nUserRepository
– Performs database operations
AuthService – Handles authentication
EmailService
– Sends emails:\\n// Database operations\\nclass UserRepository {\\n save(user: { username: string; password: string }) {\\n console.log(\\"User saved to database:\\", user);\\n }\\n}\\n// Authentication\\nclass AuthService {\\n hashPassword(password: string): string {\\n return `hashed_${password}`; // Simulating hashing\\n }\\n}\\n// Email sending\\nclass EmailService {\\n sendWelcomeEmail(username: string): void {\\n console.log(`Sending welcome email to ${username}`);\\n }\\n}\\n// UserService delegating responsibilities\\nclass UserService {\\n constructor(\\n private userRepository: UserRepository,\\n private authService: AuthService,\\n private emailService: EmailService\\n ) {}\\n registerUser(username: string, password: string): void {\\n const hashedPassword = this.authService.hashPassword(password);\\n this.userRepository.save({ username, password: hashedPassword });\\n this.emailService.sendWelcomeEmail(username);\\n }\\n}\\n// Usage\\nconst userService = new UserService(\\n new UserRepository(),\\n new AuthService(),\\n new EmailService()\\n);\\nuserService.registerUser(\\"JohnDoe\\", \\"securePassword\\");\\n
SRP seems quite simple to comprehend and use, but identifying SRP violations can be intimidating. In this section, we will provide some signs to look out for to know if a class is doing more, and thus violating the single responsibility principle.
\\nWhen a class has more than one reason to change, it breaks the single responsibility design principle.
\\nThe class, in other words, is doing too much and wearing too many hats. For example, you have a class responsible for sending email alerts, user authentication, user authorization, and database transactions. You could already tell that it’s far too much responsibility for a single class, which goes against everything SRP stands for.
\\nIf a class has too many dependencies, this can be an issue for maintainability. Using multiple third-party services or libraries within a class can lead to the class growing and becoming complex. Now, these services are very tightly coupled with the class, and it becomes quite difficult to change the class without affecting the other classes in the system.
The SOLID design principles apply to all Object-Oriented Programming (OOP) languages, but their implementation differs across languages. In this section, we will see code implementations of the SRP in Python, Java, TypeScript, and C#.
SRP states that a well-designed class has only one responsibility, and thanks to Python’s dynamic and flexible nature, we can easily implement it.
\\nHere’s an example of Python as an SRP-compliant design approach:
\\nclass UserRepository:\\n def save(self, user):\\n print(f\\"Saving user: {user}\\")\\nclass AuthService:\\n def hash_password(self, password):\\n return f\\"hashed_{password}\\"\\nclass EmailService:\\n def send_welcome_email(self, username):\\n print(f\\"Sending email to {username}\\")\\nclass UserService:\\n def __init__(self, user_repo, auth_service, email_service):\\n self.user_repo = user_repo\\n self.auth_service = auth_service\\n self.email_service = email_service\\n def register_user(self, username, password):\\n hashed_password = self.auth_service.hash_password(password)\\n self.user_repo.save({\\"username\\": username, \\"password\\": hashed_password})\\n self.email_service.send_welcome_email(username)\\n# Usage\\nuser_service = UserService(UserRepository(), AuthService(), EmailService())\\nuser_service.register_user(\\"JohnDoe\\", \\"secure123\\")\\n\\n
Java’s strict type system and interface-driven design help structure code to follow SRP.
\\nHere’s an SRP-compliant approach in Java:
\\ninterface UserRepository {\\n void save(User user);\\n}\\nclass DatabaseUserRepository implements UserRepository {\\n public void save(User user) {\\n System.out.println(\\"Saving user to database: \\" + user.getUsername());\\n }\\n}\\nclass AuthService {\\n public String hashPassword(String password) {\\n return \\"hashed_\\" + password;\\n }\\n}\\nclass EmailService {\\n public void sendWelcomeEmail(String username) {\\n System.out.println(\\"Sending welcome email to \\" + username);\\n }\\n}\\nclass UserService {\\n private UserRepository userRepository;\\n private AuthService authService;\\n private EmailService emailService;\\n public UserService(UserRepository userRepository, AuthService authService, EmailService emailService) {\\n this.userRepository = userRepository;\\n this.authService = authService;\\n this.emailService = emailService;\\n }\\n public void registerUser(String username, String password) {\\n String hashedPassword = authService.hashPassword(password);\\n userRepository.save(new User(username, hashedPassword));\\n emailService.sendWelcomeEmail(username);\\n }\\n}\\n\\n
SRP helps to keep services and modules independent in TypeScript, which makes frontend code maintainable.
\\n\\nHere’s the SRP-compliant approach in TypeScript:
\\nclass UserRepository {\\n save(user: { username: string; password: string }): void {\\n console.log(`User saved: ${user.username}`);\\n }\\n}\\nclass AuthService {\\n hashPassword(password: string): string {\\n return `hashed_${password}`;\\n }\\n}\\nclass EmailService {\\n sendWelcomeEmail(username: string): void {\\n console.log(`Sending email to ${username}`);\\n }\\n}\\nclass UserService {\\n constructor(\\n private userRepository: UserRepository,\\n private authService: AuthService,\\n private emailService: EmailService\\n ) {}\\n registerUser(username: string, password: string): void {\\n const hashedPassword = this.authService.hashPassword(password);\\n this.userRepository.save({ username, password: hashedPassword });\\n this.emailService.sendWelcomeEmail(username);\\n }\\n}\\n// Usage\\nconst userService = new UserService(\\n new UserRepository(),\\n new AuthService(),\\n new EmailService()\\n);\\nuserService.registerUser(\\"JohnDoe\\", \\"securePass\\");\\n\\n
C# encourages clean architecture with interfaces and dependency injection, enforcing SRP naturally.
\\nHere’s the SRP-compliant approach in C#:
\\npublic interface IUserRepository {\\n void Save(User user);\\n}\\npublic class UserRepository : IUserRepository {\\n public void Save(User user) {\\n Console.WriteLine($\\"User saved: {user.Username}\\");\\n }\\n}\\npublic class AuthService {\\n public string HashPassword(string password) {\\n return \\"hashed_\\" + password;\\n }\\n}\\npublic class EmailService {\\n public void SendWelcomeEmail(string username) {\\n Console.WriteLine($\\"Sending welcome email to {username}\\");\\n }\\n}\\npublic class UserService {\\n private readonly IUserRepository _userRepository;\\n private readonly AuthService _authService;\\n private readonly EmailService _emailService;\\n public UserService(IUserRepository userRepository, AuthService authService, EmailService emailService) {\\n _userRepository = userRepository;\\n _authService = authService;\\n _emailService = emailService;\\n }\\n public void RegisterUser(string username, string password) {\\n string hashedPassword = _authService.HashPassword(password);\\n _userRepository.Save(new User(username, hashedPassword));\\n _emailService.SendWelcomeEmail(username);\\n }\\n}\\n// Usage\\nUserService userService = new UserService(new UserRepository(), new AuthService(), new EmailService());\\nuserService.RegisterUser(\\"JohnDoe\\", \\"securePass\\");\\n\\n
Like other software design principles, the single responsibility principle and the rest of the SOLID principles help developers write high-quality code.
\\nThis principle is important because it allows you to:
Breaking your code into smaller, more focused units makes it easier to understand and manage.
\\nWhen classes are focused on a single responsibility, they can be reused across your application and different projects, making utility classes highly possible.
\\nSmaller classes with a single responsibility are much easier to test because there are fewer cases to consider.
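For instance, here’s a minimal Jest-style sketch. Because AuthService from the earlier example does exactly one thing, its test needs no database or email mocks:

// AuthService has a single responsibility, so testing it is trivial
test("hashPassword prefixes the password", () => {
  const auth = new AuthService();
  expect(auth.hashPassword("secret")).toBe("hashed_secret");
});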
\\nWhen bugs appear (and they will), you can debug the issue much quicker when your code structure adheres to SRP. SRP ensures that developers can pinpoint the source of a bug faster, as it provides a well-organized codebase that is easy to maintain.
\\nThe single responsibility principle provides a solid blueprint for writing high-quality code, but many developers misinterpret or misuse it. In this section, we will review some of the common misconceptions of SRP that developers should be mindful of.
This is a common misconception among developers, especially those who are new to SRP. The principle says a class should have only one responsibility, not only one method. A class can have as many methods as needed, as long as they all work together to fulfill that single responsibility well and effectively.
\\nMany developers create needless classes in the name of using the single responsibility principle (SRP). Over-abstraction is the wrong way of applying SRP. Too much abstraction makes it difficult to understand the code.
SRP at the wrong abstraction level occurs when a developer applies the principle of separation at a level that doesn’t align with the structure of the system. The code may technically follow SRP by having only one “reason to change,” but that reason may be so trivial, or so detached from the business logic, that it adds unnecessary complexity rather than clarity. The wrong abstraction level can pose serious problems, as maintenance becomes harder and components may lack flexibility.
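As a hypothetical illustration, the following technically gives each class a single reason to change, but the abstraction sits at the wrong level and only adds indirection:

// Over-abstracted: three types where one expression would do
interface Formatter {
  format(value: string): string;
}
class UppercaseFormatter implements Formatter {
  format(value: string): string {
    return value.toUpperCase();
  }
}
class FormatterFactory {
  create(): Formatter {
    return new UppercaseFormatter();
  }
}
// At the call site, a plain value.toUpperCase() would be clearer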
SRP is easy to apply in small projects, but its real usefulness shows as a project’s complexity grows.
In large programs, SRP applies to the system architecture as well as individual classes, ensuring that different components address different concerns.
\\nBelow, we will examine how SRP is effectively used in modular monolithic architecture, which is a common pattern in small to medium projects, and microservices architecture, which is most common in industry-level applications.
\\nAt the service level, each service is in charge of a specific business capability. Microservices inherently enforce SRP.
Here’s how SRP works in microservices:

– Each service owns a single business capability
– Services can be developed, deployed, and scaled independently
– A change to one capability doesn’t ripple into unrelated services
\\nLet’s consider an example. A PaymentService
executes transactions, whereas a UserService
solely manages operations pertaining to users. If the payment logic needs to be changed, the UserService
will not be impacted.
The application is still a single unit in modular monoliths, but it is separated into SRP-compliant, well-structured modules.
\\nHere’s how SRP works in modular monoliths:
– Each module encapsulates a single responsibility (e.g., UserModule, BillingModule)

Let’s think of an example. An e-commerce platform can contain distinct modules for users, products, and orders, each focusing on a particular responsibility rather than a single monolithic service managing everything.
\\nEffective implementation of the Single Responsibility Principle (SRP) requires careful planning and sensible decision-making, in addition to class separation.
\\nMake sure the responsibilities are separate before dividing a class into many parts. Related functions may belong together, so not all multi-method classes violate SRP. Divide a class into distinct classes if it covers several business issues, but refrain from needless fragmentation.
\\nEnforcing SRP at the architectural level helps avoid bloated classes in large applications. A layered architecture, which keeps the user interface, business logic, and data access distinct, is best practice. Additionally, modularizing your code organizes related functionality into well-structured modules or services.
SRP works best in combination with other SOLID principles, such as:

– The open-closed principle (OCP), which keeps single-purpose classes extensible without modification
– The dependency inversion principle (DIP), which keeps those classes decoupled from what they depend on
\\nIt is best to use dependency injection to pass services into classes instead of hard-coding dependencies.
\\nOverapplying SRP too early can lead to excessive abstraction, making the system harder to manage. It is best practice to start simple, then refactor when you notice clear SRP violations. Use code smells (e.g., a class with too many dependencies) as signals for refactoring.
A good test suite can reveal whether a class has too many responsibilities. If a single class needs to be tested for several independent functionalities, SRP may be broken. And if a test becomes complicated because it covers several concerns, you should probably refactor the class.
\\nA key idea in software design is the Single Responsibility Principle (SRP), which guarantees that any class, module, or component has just one reason to change. By using SRP, developers can produce code that is easier to debug, test, and extend, making it clearer, more maintainable, and scalable.
\\nTo provide flexibility and long-term maintainability, SRP carefully separates concerns rather than merely dividing classes.
Bare simplicity isn’t very common in the rapidly shifting modern web development landscape. Innovative solutions like server components and libraries like React, Vue, Svelte, etc., are transforming how we build server-powered web applications.
\\nHowever, the developer experience these modern solutions offer often doesn’t match the straightforwardness of classic libraries like jQuery. The htmx library aims to bridge this gap by providing a simplified approach to creating server-driven apps.
In this tutorial, we’ll explore htmx, see some practical working examples, and learn how to use it to build a to-do app. The only prerequisites are a working knowledge of HTML, JavaScript, HTTP requests, and the web in general.
\\nhtmx is a small browser-oriented JavaScript library aimed at building no-nonsense, server-driven web apps. It lets you create UIs powered by server responses with simple markup via custom attributes.
\\nWith htmx, you can perform AJAX requests, trigger CSS transitions, and even invoke WebSocket and server-sent events directly from HTML elements.
\\nThe core idea of htmx involves exchanging hypermedia between the client (browser) and the server using HTTP to control the UI and functionality.
\\nIt’s not HTML vs. htmx as one might think. HTML is the standard for marking up web pages, while htmx is a library for adding more flair to HTML, making it ready to talk to the backend in a few quick steps.
\\nIn modern-day web development, making asynchronous requests and updating the UI are two fundamental tasks that involve JavaScript usage.
htmx takes an entirely different path. Instead of relying on data-centric APIs like REST or GraphQL that exchange JSON or XML, it follows a hypermedia-driven approach where hypertext and media are sent and received directly via AJAX requests.
\\nHere’s a glimpse of simple data-fetching done with htmx directly through your markup:
\\n<button \\n hx-get=\\"/path/to/api\\" \\n hx-swap=\\"innerHTML\\"\\n hx-target=\\"#target-container\\"\\n>Click Me</button>\\n\\n<div id=\\"target-container\\">...</div>\\n\\n
The code above illustrates an HTML button with some htmx attributes. Once this button is clicked, your htmx-powered application sends a GET
request to the API with the hx-get
attribute.
The hypermedia content received in response to this request replaces the inner HTML of #target-container
as specified by the hx-target
and hx-swap
attributes.
We will discuss these attributes soon in the upcoming sections.
\\nSimplicity is one of the main benefits of using htmx, which also positions the library to help you get things done quickly while writing minimal code.
\\n\\nThe htmx library has been criticized in the JavaScript community for its unconventional ways of doing things. Here are some of the areas where htmx may fail to deliver:
\\nEven though htmx is not accepted as warmly as other traditional JavaScript libraries, it’s still improving and has recently added some good changes to its 2.0 and later releases:
\\nhx-on
becomes hx-on:
for better event declarations

Building apps with htmx differs from how apps are typically developed with JavaScript nowadays. Let’s learn some htmx basics first, from installation to presenting data, and then we’ll take a hands-on approach to using htmx with a modern backend and database solution.
\\nThe easiest way to start with htmx is by including its CDN link directly in your markup, as shown below:
\\n<script src=\\"https://unpkg.com/[email protected]\\"></script>\\n\\n
The script above loads the current stable version of htmx — at the time of writing, version 2.0.4 — on your webpage. Alternatively, you can install htmx with your favorite package manager using the command below:
\\npnpm install [email protected]\\n\\n
This suits the traditional JavaScript project workflow and may require you to make the htmx object accessible globally to use htmx attributes anywhere in the HTML:
\\nimport htmx from \'htmx.org\';\\nwindow.htmx = htmx;\\n\\n
We won’t use any build tool in this guide, but we will utilize Express with Supabase to perform some CRUD operations later. We’ll follow a different approach to access the library.
\\nOnce that’s done, you can start implementing htmx on your webpage. We’ll see major htmx features closely by building a simple Todo app with a backend powered by Express and a database managed by Supabase.
\\nhtmx provides five attributes to make all kinds of AJAX requests directly from HTML elements:
\\nhx-get
— Send a GET request to the provided URL
hx-post — Send a POST request to the provided URL
hx-put — Send a PUT request to the provided URL
hx-patch — Send a PATCH request to the provided URL
hx-delete — Send a DELETE request to the provided URL
in the next few htmx examples.
We expect a response for every request made to the backend. In htmx, that response is likely to be hypermedia content that goes right on the front end.
\\nThe placement of content received is taken care of by defining targets and swapping policies with hx-swap
and hx-target
attributes:
<button hx-get=\\"path/to/api\\" hx-swap=\\"textContent\\" hx-target=\\"#content\\">\\n Click to load content from path/to/url\\n</button>\\n\\n<div id=\\"content\\">...</div>\\n\\n
Here’s a working demonstration of the same, loading a random tech joke on button click from the Joke API:
\\nSee the Pen
\\nHTMX Joke Generator by Rahul (@c99rahul)
\\non CodePen.
With hx-trigger
, you may specify different web API events, such as load, click, or mouseenter, to trigger a request:
<div hx-get="path/to/api" hx-trigger="load" hx-swap="innerHTML">
  This should change once the page loads.
</div>
Using this approach in our example, we can populate the joke division automatically with the required data:
\\nSee the Pen
\\nHTMX Joke Generator by Rahul (@c99rahul)
\\non CodePen.
The hx-swap
attribute can take a transition value to turn on automatic transition during swapping and settling the content:
<button \\n hx-get=\\"path/to/api\\" \\n hx-target=\\"#content\\"\\n hx-swap=\\"textContent transition:true\\">\\n Click to load content from path/to/url\\n</button>\\n\\n
See the Pen
\\nHTMX Joke Generator (w/ Transitions) by Rahul (@c99rahul)
\\non CodePen.
For perceived performance, you may show loading indicators until the data finishes loading in the specified target. This addition calls for some custom CSS using the .htmx-request
and .htmx-indicator
CSS classes.
The .htmx-request
class is added to the indicator whenever a request is made. It gets removed automatically after the request is completed.
We can add .htmx-indicator
class to our loading indicator to tell htmx about it, and then utilize .htmx-request
class in the CSS to take care of the loader animation as well as the transition of the content:
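The CSS side of this is small. Here’s a minimal sketch; the class names are htmx’s own, while the transition timing is an arbitrary choice:

/* Hidden by default; htmx toggles .htmx-request during an in-flight request */
.htmx-indicator {
  opacity: 0;
  transition: opacity 200ms ease-in;
}
.htmx-request .htmx-indicator, /* indicator inside the requesting element */
.htmx-request.htmx-indicator { /* or the indicator is the requesting element */
  opacity: 1;
}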
See the Pen
\\nHTMX Joke Generator (w/ Loading indicator) by Rahul (@c99rahul)
\\non CodePen.
Using these htmx basics, you can start making an app with any backend of your choice. As discussed before, we are pairing it with modern JavaScript-based solutions to observe how adaptable htmx is in modern scenarios.
\\nLet’s learn htmx by building a to-do app. Our objective is to create a simple yet functional htmx CRUD app capable of fetching our to-do data from a database, creating new data, as well as updating and deleting existing data on demand.
Our first step should be creating a project directory to organize the different files of our to-do app. After creating and cd-ing into the project directory, we are ready to add the dependencies necessary to build our app:
\\ncd htmx-todo\\n\\n
We’ll start by installing htmx in the traditional JavaScript way. We’ll also add Express, the widely used JavaScript backend framework, to manage our backend API without hassle:
\\npnpm add [email protected] express\\n\\n
Let’s create a JavaScript file at the root of our project and call it server.js
. This file manages everything backend-related, from connecting the app to the database to serving it in the browser:
// server.js\\nimport path from \'path\';\\nimport { fileURLToPath } from \'url\';\\nimport express from \'express\';\\nimport bodyParser from \'body-parser\';\\nimport { createClient } from \'@supabase/supabase-js\';\\n\\nconst __filename = fileURLToPath(import.meta.url);\\nconst __dirname = path.dirname(__filename);\\n\\nconst app = express();\\nconst port = process.env.PORT || 3000;\\nconst host = process.env.HOST || \'localhost\';\\n\\n
Next, we should pick a database solution to manage our to-do data. I’m going for the Supabase free tier, which is apt for testing purposes and hobby projects. You may follow the underlying steps to continue with Supabase or use a different solution, if you’re more comfortable doing that.
\\nAfter creating a project on Supabase, carefully note down the project URL and the anon key. Follow the screenshot below if you are having a hard time finding these two API secrets:
\\nNow, put these credentials in a .env
file, which should look like the below:
SUPABASE_URL=<Path to your Supabase URL>
SUPABASE_KEY=<Your Supabase anon key>
SERVER_HOST=localhost
SERVER_PORT=5555 # Or any port you'd like to use
The .env
file lives at the root of the project. You may have different variations of .env
files for shared, development, production, testing, and other environments. I'm sticking to the simplest one here, but you may add more depending on your needs.
N.B.: Ensure your project has a .gitignore
file with all the .env
files mentioned. This will keep your Supabase API secrets and other sensitive info from being accidentally pushed to a public repository.
Using your Supabase dashboard, either use the Table Editor or run the following SQL query to create a table to hold our to-do list data:
\\nCREATE TABLE todos (\\n id SERIAL PRIMARY KEY,\\n task TEXT NOT NULL,\\n completed BOOLEAN DEFAULT FALSE,\\n created_at TIMESTAMPTZ DEFAULT NOW()\\n);\\n\\n
In the above query, we are adding the following four fields:
\\nid
— A unique ID for each todo tasktask
— A text field to store the task descriptioncompleted
— A boolean property to help track whether the task is complete or notcreated_at
— A timestamp to know when the task was createdMoving on, let’s hit the terminal again and install the Supabase JavaScript client:
\\npnpm add @supabase/supabase-js\\n\\n
We can now access our Supabase credentials from the .env
file and use them as shown below in our app’s backend with the server.js
file:
// server.js\\n/* Previous code... */\\n\\nconst supabaseUrl = process.env.SUPABASE_URL;\\nconst supabaseKey = process.env.SUPABASE_KEY;\\nconst supabase = createClient(supabaseUrl, supabaseKey);\\n\\n
We’ll handle templating with ejs, one of the commonly used templating tools available for JavaScript-powered apps. We’ll also add body-parser, an Express middleware that makes reading request bodies a breeze:
\\npnpm add ejs body-parser\\n\\n
We’ll then instruct Express to employ ejs as the templating engine and use body-parser middleware to parse form data:
\\n// server.js\\n/* Previous code... */\\n\\napp.set(\'view engine\', \'ejs\'); // Tell Express to use ejs\\napp.use(bodyParser.urlencoded({ extended: true })); // For parsing form data\\n\\n
To save the hassle of copying the htmx library file to the public folder to access it, we should ask Express to serve it directly from the package directory with a normal-looking path like js/htmx.min.js
:
// server.js\\n/* Previous code... */\\n\\napp.get(\'/js/htmx.min.js\', (req, res) => {\\n res.sendFile(path.join(__dirname, \'node_modules/htmx.org/dist/htmx.min.js\'));\\n});\\n\\n
I’ll let you improvise on the styling part: feel free to use a framework of your choice or write vanilla CSS if you like. I’m using Tailwind CSS Play CDN to refer directly to the framework in the head of our index.ejs
file.
Let’s define a route handler for the root path in server.js
. Whenever this route is requested by the client (browser), the backend should send some dummy text in response. This will help us check if our app is running correctly:
app.get(\'/\', (req, res) => {\\n res.send(\'Hello, world!\')\\n})\\n\\napp.listen(port, () => {\\n console.log(`Server running on http://${host}:${port}`);\\n});\\n\\n
Finally, we ensure that when our app runs, the Express server serves it using the host and port we specified in our .env
file.
It’s time to configure some scripts in the package.json
file that we could use in the terminal to run server.js
to serve our app:
{\\n \\"name\\": \\"htmx-todo\\",\\n \\"...\\": \\"...\\",\\n \\"scripts\\": {\\n \\"start\\": \\"node --env-file=.env server.js\\",\\n \\"dev\\": \\"node --env-file=.env --watch server.js\\"\\n },\\n}\\n\\n
Make sure you are using a recent Node version for effortless file watching and automatic loading of environment files. You may also consider using a tool like Nodemon for a slightly better development workflow, but I’m sticking to the basics to keep things simple and easy to understand.
\\nNow, running pnpm dev
in the terminal should start serving our app, which looks something like the screenshot below:
With the Supabase API, we expect to receive an array of objects containing the data requested from our database. Considering todo
as an object of that array, let’s create some template partials to utilize later with our backend.
Firstly, let's establish an index template responsible for rendering on the main route of our app. Let's call it index.ejs and place it in a views directory, the default location for templates in Express. It should look like the following, where we reference the htmx library using the static path configured above:
<!-- views/index.ejs -->
<!DOCTYPE html>
<html>
  <head>
    <title>Todo App w/ htmx</title>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <script src="/js/htmx.min.js"></script>
    <!-- Add more CSS/JS stuff here if needed -->
  </head>
  <body>
    <!-- more elements will be added here -->
  </body>
</html>
This resembles the traditional HTML approach, where we write only markup, CSS, and JavaScript to lay out a page. We will add more elements to this file in the later sections.
\\nSecondly, add a partials
directory to views
, and create the todo-item.ejs
partial that renders a single item in our to-do list. It uses the unique id
and task
properties of the todo
object to populate the list item. Later, we will use the same id
property to update or delete the todo tasks:
<!-- views/partials/todo-item.ejs -->
<li id="todo-<%= todo.id %>">
  <%= todo.task %>
</li>
Next up, use todo-item.ejs
in the todo-list.ejs
partial to loop through the entire array of data objects received from the database:
<!-- views/partials/todo-list.ejs -->
<div id="todo-list">
  <% if (todos && todos.length > 0) { %>
    <ul>
      <% todos.forEach(todo => { %>
        <%- include('todo-item', { todo }) %>
      <% }) %>
    </ul>
  <% } else { %>
    <div>
      <p>No todos yet!</p>
    </div>
  <% } %>
</div>
Lastly, let's also define an error.ejs partial to display error messages. You may style it differently so the messages read as errors:
<!-- views/partials/error.ejs -->
<div class="...">
  <p><%= message %></p>
</div>
As discussed, our backend API is managed using Express. We will use Express to specify the content to render on specific routes while responding to requests with certain endpoints to perform CRUD operations.
\\nWe can use different htmx attributes to make certain requests to our backend, which we will set up in the next few segments.
\\nIn simpler words, sending data means taking the to-do task data from the frontend, sending it to the backend, and storing it in our database.
\\nIn the server.js
file, we should define an endpoint to make a POST
request to our backend, which should then talk to the Supabase client and add data to the database:
// server.js

app.post('/todos', async (req, res) => {
  try {
    const { task } = req.body;

    if (!task || task.trim().length === 0) {
      return res.render('partials/error', { message: 'Task is required' });
    }

    const { error: insertError } = await supabase
      .from('todos')
      .insert([{ task: task.trim() }]);

    if (insertError) throw insertError;

    // Fetch updated list
    const { data: todos, error: fetchError } = await supabase
      .from('todos')
      .select('*')
      .order('created_at', { ascending: false });

    if (fetchError) throw fetchError;

    res.render('partials/todo-list', { todos });
  } catch (error) {
    console.error('Error creating todo:', error);
    res.render('partials/error', { message: 'Failed to create todo' });
  }
});
Whenever a POST
request is made to the /todos
endpoint, the above code parses the todo task from the request body and inserts it into our database. We then fetch an updated list of todos from the database and render the todo-list.ejs template with the freshly updated data.
Let’s also update the index.ejs
template to make POST
requests to this endpoint:
<!-- views/index.ejs -->
<!-- Add todo form -->
<form
  hx-post="/todos"
  hx-target="#todo-list"
  hx-swap="outerHTML transition:true"
  hx-on::after-request="this.reset()"
>
  <input
    type="text"
    name="task"
    required
  />
  <button type="submit">Add</button>
</form>
Now, we can send our to-do tasks to our database using the above form. The form element in the above code performs the following operations:
\\nhx-post
to hit the /todos
endpoint in the backendhx-target
hx-swap
hx-on:
to set up an htmx event to reset the form after the request has been madeNote: In the backend, the name
attribute of the text input acts as an anchor point for accessing the data from the request. Ensure that you use the same name attribute to get data from the request body when setting up a POST
request in the backend.
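To make that connection explicit, here's a minimal sketch of how the input's name maps onto the request body, assuming the body-parser setup from earlier; the route body is abbreviated:

// The form field <input name="task"> arrives as req.body.task;
// renaming the input to name="title" would make it req.body.title instead
app.post('/todos', (req, res) => {
  const { task } = req.body; // key must match the input's name attribute
  // ...
});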
Receiving the data from the database and showing it on the frontend is relatively simple compared to other operations.
\\nTo receive the data, we should establish an endpoint for the GET
request in the backend and then utilize the Supabase API to query the data from our todos
table:
app.get(\'/\', async (req, res) => {\\n try {\\n const { data: todos, error } = await supabase\\n .from(\'todos\')\\n .select(\'*\')\\n .order(\'created_at\', { ascending: false });\\n\\n if (error) throw error;\\n\\n res.render(\'index\', {\\n todos: todos || [],\\n error: null,\\n });\\n } catch (error) {\\n console.error(\'Error fetching todos:\', error);\\n res.render(\'index\', {\\n todos: [],\\n error: \'Failed to load todos\',\\n });\\n }\\n});\\n\\n
We can now employ the templates we’ve created previously to show the data on the frontend. Here’s how we can use them in our index.ejs
file to display the data we receive through a GET
request:
<!-- views/index.ejs -->
<!-- Add todo form -->
<form
  hx-post="/todos"
  ...
>
  <!-- Previous setup -->
</form>

<!-- Todo list container -->
<% if (error) { %>
  <%- include('partials/error', { message: error }) %>
<% } else { %>
  <%- include('partials/todo-list', { todos }) %>
<% } %>
Since we are making this GET
request to the endpoint where our to-dos render by default, we don’t need an hx-get
trigger to control the request.
When our app’s root path is requested, the to-do data is made available to index.ejs
, which then renders in the browser as expected with the help of todo-item.ejs
, or an error is displayed using error.ejs
if something goes wrong.
Updating comes in handy for marking a todo task as completed, while deletion removes a task you no longer need.
\\nWe use PUT
and DELETE
requests to send update and deletion requests to the backend. Let's extend our todo-item.ejs template to consume the endpoints we'll define in our backend for updating and deleting tasks.
We’ll use the same id
property from the todo
object to construct the endpoint for hx-put
and hx-delete
attributes, as shown in the updated template below:
<!-- views/partials/todo-item.ejs -->
<li id="todo-<%= todo.id %>">
  <form
    hx-put="/todos/<%= todo.id %>"
    hx-target="#todo-<%= todo.id %>"
    hx-swap="outerHTML"
    hx-on:change="this.requestSubmit()"
  >
    <!--
      Naming this input (completed) is important here; this
      is what we will receive in the request body at
      the backend.
    -->
    <input
      type="checkbox"
      id="todo-item-<%= todo.id %>"
      name="completed"
      <%= todo.completed ? 'checked' : '' %>
    />
    <input
      type="hidden"
      name="task"
      value="<%= todo.task %>"
    />
    <label
      for="todo-item-<%= todo.id %>"
      style="text-decoration: <%= todo.completed ? 'line-through' : 'none' %>"
    >
      <%= todo.task %>
    </label>
    <button
      type="button"
      hx-delete="/todos/<%= todo.id %>"
      hx-target="#todo-list"
      hx-swap="outerHTML transition:true"
      hx-confirm="Are you sure you want to delete this task?"
    >
      Delete
    </button>
  </form>
</li>
The above template is now well-equipped to send the id
property and completion status of a given to-do task in a request through hx-put
and hx-delete
attributes. If you look closely, we've also added an hx-confirm attribute to prompt the user to confirm the deletion.
We should receive these PUT and DELETE requests at a /todos/:id endpoint in the backend and then use the Supabase API to query the database accordingly.
Now, we also need the id
and completion status of our to-do task to query our database and update or delete this selected task. In this case, we should use Express’s route parameter to access this id
property from the request.
Here’s how you would set up a route handler for a PUT
request in Express to update our to-do tasks:
// server.js\\napp.put(\'/todos/:id\', async (req, res) => {\\n try {\\n const { id } = req.params;\\n const { task, completed } = req.body;\\n\\n // Create updates object, handling checkbox value\\n const updates = {\\n task: task?.trim(),\\n completed: completed === \'on\'\\n };\\n\\n // Remove undefined values\\n Object.keys(updates).forEach(key => \\n updates[key] === undefined && delete updates[key]\\n );\\n\\n // Update in database\\n await supabase.from(\'todos\').update(updates).eq(\'id\', id);\\n\\n // Get and return updated todo\\n const { data: todo } = await supabase\\n .from(\'todos\').select(\'*\').eq(\'id\', id).single();\\n\\n res.render(\'partials/todo-item\', { todo });\\n } catch (error) {\\n res.render(\'partials/error\', { message: \'Failed to update todo\' });\\n }\\n});\\n\\n
Similarly, we can set up a route handler for a DELETE
request, which picks up the id
for the deleted to-do task from the request parameters:
app.delete(\'/todos/:id\', async (req, res) => {\\n try {\\n const { id } = req.params;\\n\\n // Delete from database\\n await supabase.from(\'todos\').delete().eq(\'id\', id);\\n\\n // Check if any to-dos remain\\n const { data: remainingTodos } = await supabase\\n .from(\'todos\').select(\'*\');\\n\\n // Refresh the #todo-list for proper UI updates\\n res.setHeader(\'HX-Retarget\', \'#todo-list\');\\n\\n // Render the fresh todo list\\n res.render(\'partials/todo-list\', { todos: remainingTodos });\\n } catch (error) {\\n res.render(\'partials/error\', { message: \'Failed to delete to-do\' });\\n }\\n});\\n\\n
As you may have noticed, we are setting an htmx-specific HX-Retarget
response header that tells the client to update the #todo-list
element again, so a proper UI update can display the empty placeholder when the last task is removed.
Piecing it all together with some Tailwind CSS decorations, our htmx todo app should look something like the following:
Feel free to fork this app and improve it by adding other htmx features like CSS animations, indicators, and more.
If you are coming from a React, Vue, or Svelte background, you might not like the way htmx works. You'll find no component support, no state management, and the other tradeoffs we discussed previously in the article. htmx was created to enhance server-rendered HTML; it is therefore best suited to quick, small apps that make frequent network trips.
To decide if htmx is for you, try building more apps with it. Explore the htmx extensions for common DOM operations, and go the extra mile by trying the WebSocket and SSE extensions.
\\nThe htmx community is steadily growing, with the library garnering over 44k stars on GitHub. Given its popularity and active development, we can expect even more features and improvements in the future.
In this post, we'll discuss nine alternatives to Create React App, which was deprecated in February 2025: Vite, Parcel, Rsbuild, Next.js, Remix, React Router, Expo, TanStack Start, and RedwoodJS.
\\nSince its release in 2016, Create React App (CRA) has been one of the go-to tools for spinning up new React applications. However, in recent years, the React team has been encouraging developers to move away from CRA to other tools in the React ecosystem.
\\nAs of February 14, 2025, CRA has been officially deprecated. While it’s still technically usable, it is no longer being maintained, meaning it won’t receive updates, security patches, or improvements. For long-term projects, continuing to use CRA is not advisable.
\\nSo, what should React developers use instead? In this article, we’ll explore some of the best alternatives to CRA and how they can help you build modern React applications more efficiently.
\\nCreate React App offered an easy, low-configuration solution for setting up React projects. It combined tools like JSX, linting, and hot reloading into a single configuration, speeding up the process of bootstrapping React apps.
\\n“Well then, why was it deprecated?” you ask. Here are some reasons:
\\nThe JavaScript ecosystem evolved over time, and CRA’s build system became outdated. Modern tools like Vite, Parcel, and Rsbuild have emerged, and they offer significantly faster builds and improved performance. CRA, which still relies on webpack, struggled to keep up with these advancements.
\\nCRA’s slow build times and inefficient bundling made it difficult for developers to maintain a smooth workflow. Its hot module replacement (HMR) was also not as fast as other solutions, which made it slower by comparison. Similarly, its poor tree-shaking led to larger bundle sizes and slower load times.
\\nModern React apps need more than just a basic setup. Features like server-side rendering (SSR), static site generation (SSG), code-splitting, routing, and data fetching optimizations have become essential. CRA, being a client-side tool, did not support these features out of the box.
\\nCRA had become difficult to maintain, and with no dedicated team actively working on updates, it became outdated.
\\nInstead of relying on boilerplate tools like CRA, the React ecosystem has been moving toward opinionated frameworks that provide end-to-end solutions for building applications.
\\nHaving understood the React team’s motivation for deprecating CRA, let’s explore alternative solutions, starting with the build tools the team recommended.
\\nVite is a modern frontend build tool that provides an extremely fast development environment for React applications. Unlike CRA, which uses webpack, Vite uses esbuild and native ES modules, resulting in near-instant cold starts and fast HMR.
\\nVite is well-suited for small to large React projects that need fast builds. It’s a great tool to work with if you need a performant webpack alternative that requires minimal configuration. It’s also a good fit for apps that don’t need SSR or SSG.
\\nParcel is a zero-configuration build tool that aims to simplify the setup process for web applications. It automatically handles code splitting, hot module replacement, and asset bundling without the need for extensive configuration.
\\nParcel is a good fit for developers who prefer convention over configuration and want to get started quickly without manual setup.
\\nRsbuild is a modern build tool powered by Rspack, a high-performance JavaScript bundler written in Rust. It offers speed and efficiency and supports various frameworks, making it a solid choice for applications of all sizes.
\\nRsbuild is great for projects where build performance is critical.
\\nThat’s it for the build tools. Now, let’s explore the alternative frameworks and libraries that the React team recommended.
\\nNext.js, created by Vercel, allows developers to build server-rendered applications and static websites. It also addresses some challenges of client-side rendering, like poor SEO and slow initial load times.
Next.js has become one of the most popular and widely used frameworks in the React ecosystem.
\\nNext.js is suitable for building server-rendered applications, static websites, and projects that require SEO optimization and fast page loads.
\\nRemix is a full-stack React metaframework developed by the creators of React Router. Commonly known as Next.js’ competitor, Remix has cemented its place in the React ecosystem.
\\nRemix is great for building dynamic, data-driven applications where performance is a focus.
\\nReact Router is a popular, and dare I say go-to, routing library for React apps. It allows you to define how different parts of your application respond to navigational elements, ensuring that users can navigate through various components without a full page reload.
React Router is ideal for single-page applications that need dynamic routing and complex navigation structures.
\\n\\nExpo is an open-source framework and a platform built around React Native. It allows you to create robust apps for iOS and Android using JavaScript and React.
With Expo, you can write your application in a single codebase and deploy it across multiple platforms, eliminating the need to write separate code for iOS and Android.
\\nExpo is suited for developers building cross-platform apps.
\\nTanStack Start, currently in beta, is a full-stack React framework built on top of TanStack Router, Nitro, and Vite. While it’s still a new framework, TanStack Start was recommended by the React team.
TanStack Start is ideal for data-intensive apps that need efficient data fetching and caching. It's also a good fit for SEO-critical projects since it supports SSR.
\\nRedwoodJS is an opinionated, full-stack web framework designed to help projects scale from side projects to startups. It combines tools like React, GraphQL, Prisma, TypeScript, Jest, and Storybook to provide an end-to-end development workflow.
\\nRedwood is designed to support applications scaling from small projects to full-fledged startups. It is also ideal for building Jamstack apps without the complexity of setting up static site generators or a headless CMS.
\\nHere are some tips and best practices to follow when migrating from Create React App:
\\nBefore migrating, assess your current project to understand its structure, dependencies, and custom configurations. Identify any CRA-specific features or ejected webpack configurations that may need to be replaced.
\\nDifferent tools offer different benefits, so choose the best alternative based on your project’s needs. Note that the alternative you choose will affect the migration process because each tool has a different architecture, configuration style, and feature set.
\\nFor example, Vite relies on ES modules and requires moving index.html
to the root directory while replacing react-scripts
with vite
commands.
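For instance, the scripts section of package.json would change roughly like this; a sketch assuming a default Vite setup:

{
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  }
}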
If you choose Next.js, you will need to refactor your routing system — which will change your application’s and folder’s structure — since it uses file-based routing instead of React Router.
It's always best to execute your migration in stages instead of all at once. Migrating in stages reduces risk, downtime, and disruption if anything breaks during the process.
While the exact process may differ, a phased migration typically runs the new tool alongside CRA, ports the build configuration first, migrates features or routes incrementally, and removes CRA only once everything works on the new setup.
The journey doesn't end with migrating all components and dependencies. You also need to perform post-migration tasks to ensure your application runs smoothly in production, such as re-running your full test suite, verifying environment variables and build outputs, and monitoring performance and bundle sizes.
No man is an island. Luckily, and thankfully, you don't need to figure out the migration process from scratch. The ecosystem is full of teams that have successfully migrated from one tool to another, along with tutorials, migration guides, and official docs you can lean on as you migrate.
\\nWith all we’ve discussed in this article and the various resources highlighted, you’re set for a successful migration from Create React App.
Sending requests to a web server is one of the most commonly performed tasks in frontend development. Creating a Facebook post, uploading a new Instagram image, sending a post on X, or signing up on a website all send requests to a server.
Axios is a free and open source promise-based HTTP library that runs both in the browser and Node.js. In this article, you'll learn how to use the Axios POST
method in vanilla JavaScript and frameworks like React. Before proceeding, you should have an understanding of React and how React form elements work.
Axios is a lightweight HTTP client. You can use it to make asynchronous HTTP requests in the browser and Node.js. Because it’s promise-based, you can use promise chaining as well as JavaScript’s async/await.
\\nAxios is also quite similar to the native JavaScript Fetch API. It offers methods like POST
, PUT
, PATCH
, GET
, DELETE
, and more. In this article, we will focus on the POST
method. To understand this method, let’s consider the following scenario.
Take logging into Facebook. When we start using the app, it asks us to either sign up or log in if we already have an account. We must fill in the required form details and submit them to the server.
\\nThe server then verifies the information we submitted and loads the main app or responds with an error message if the provided credentials are incorrect. POST
is the Axios method that allows us to do that. Below is what an Axios POST
request looks like:
axios.post(url[, data[, config]])\\n\\n
From the code above, the Axios POST
method takes three parameters: URL
, data
, and config
. URL
is the server path to which we are sending the request (note that it is a string). data
, which is an object, contains the request body that we’re sending to the server. Finally, config
is the third parameter where you can specify the header content type, authorization, and more. It is also in an object format.
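To make the three parameters concrete, here's a minimal sketch; the URL, body values, and token are placeholders:

axios.post(
  "https://example.com/api/login",                   // url: where the request goes
  { email: "user@example.com", password: "secret" }, // data: the request body
  { headers: { Authorization: "Bearer <token>" } }   // config: headers, auth, etc.
);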
Now that we understand what Axios is and its POST
method, let’s see how to use it.
Editor’s note: This post was last updated by Emmanuel John in April 2025 to add a troubleshooting section for common Axios POST
issues and introduce advanced use cases and optimizations, including Axios interceptors for global request handling and retry logic for failed POST
requests.
You might wonder why you should use Axios instead of the native JavaScript Fetch API. Comparatively, Axios has some advantages over Fetch, which we will look at shortly.
\\nFirst, Axios serializes your request body into JSON string out of the box if the content-type
header is set to application/json
. This differs from the Fetch API, which requires you to first convert the payload to JSON string using JSON.stringify
, as shown below:
// With Fetch\\nfetch(url, {\\n method: \\"POST\\",\\n body: JSON.stringify(payload),\\n headers: {\\n \\"Content-Type\\": \\"application/json\\",\\n },\\n})\\n .then((response) => response.json())\\n .then((data) => console.log(data))\\n .catch((error) => console.error(error));\\n\\n// With Axios\\naxios\\n .post(url, payload)\\n .then((response) => console.log(response))\\n .catch((error) => console.error(error));\\n\\n
Similarly, Axios also deserializes a JSON response out of the box. With the native Fetch API, you need to parse the response yourself, typically by calling response.json().
Unlike the built-in Fetch API, Axios provides convenient methods for each of the HTTP request methods. To perform a POST
request, you use the .post()
method, and so on:
axios.post() // to perform POST request\\naxios.get() // to perform GET request\\naxios.put() // to perform PUT request\\naxios.delete() // to perform DELETE request\\naxios.patch() // to perform PATCH request\\n\\n
Other reasons to use Axios POST
over the Fetch API include built-in XSRF protection, request and response interceptors, and easier timeouts and cancellation, all of which we'll cover later in this article.

How to use the Axios POST method
As mentioned earlier, you can use the Axios POST
method in vanilla JavaScript and React. Let’s start with the former and then proceed to the latter. Keep in mind that this article will focus on React, and we will use the reqres.in
dummy API for our examples.
POST request in vanilla JavaScript

To use Axios in vanilla JavaScript, we must first add the CDN link in the HTML before using it in the script
file. Let’s start by creating two files to use: index.html
and index.js
:
// index.html\\n\\n<!DOCTYPE html>\\n<html>\\n <head>\\n <title>Parcel Sandbox</title>\\n <meta charset=\\"UTF-8\\" />\\n </head>\\n <body>\\n <div id=\\"app\\">\\n <h1>Login Account</h1>\\n <form action=\\"\\" id=\\"login-form\\">\\n <label for=\\"email\\">\\n Email\\n <input type=\\"email\\" name=\\"email\\" id=\\"email\\" required />\\n </label>\\n <label for=\\"password\\">\\n Password\\n <input type=\\"password\\" name=\\"password\\" id=\\"password\\" required />\\n </label>\\n <button id=\\"btn\\" type=\\"submit\\">Login</button>\\n </form>\\n </div>\\n <script src=\\"https://unpkg.com/axios/dist/axios.min.js\\"></script>\\n <script src=\\"index.js\\"></script>\\n </body>\\n</html>\\n\\n
This HTML file has a simple login form with two input fields, the email and the password fields, and a submit button. At the bottom, just above the index.js
link, we also added the Axios CDN.
Next, in the index.js
file we created above, we select the form element, and email and password input
elements using their ID
s. We can then add a submit
event handler to the form element. It is triggered whenever we submit the form:
// index.js\\n\\nconst emailInput = document.getElementById(\\"email\\");\\nconst passwordInput = document.getElementById(\\"password\\");\\nconst loginForm = document.getElementById(\\"login-form\\");\\n\\nloginForm.addEventListener(\\"submit\\", (e) => {\\n e.preventDefault();\\n\\n const email = emailInput.value;\\n const password = passwordInput.value;\\n\\n axios\\n .post(\\"https://reqres.in/api/login\\", {\\n email,\\n password,\\n })\\n .then((response) => {\\n console.log(response);\\n });\\n});\\n\\n
You can submit [email protected]
and cityslicka
as the email and password values, respectively. The reqres.in
dummy API will return a response token
with a 200
status code for a successful POST
request:
POST request in React

We can now perform the same POST
request in a React project. We need to first install Axios using npm or Yarn. Depending on your package manager, install Axios by running one of the commands below:
# using npm\\nnpm install axios\\n\\n# using yarn\\nyarn add axios\\n\\n
With Axios installed, let’s open our App.js
file. Unlike in vanilla JavaScript, we need to import Axios before using it. In our handleSubmit
function, we will invoke the Axios POST
method just as we did in the vanilla example above:
import React, { useState } from \\"react\\";\\nimport axios from \\"axios\\";\\n\\nconst App = () => {\\n const [data, setData] = useState({\\n email: \\"\\",\\n password: \\"\\"\\n });\\n\\n const handleChange = (e) => {\\n const value = e.target.value;\\n setData({\\n ...data,\\n [e.target.name]: value\\n });\\n };\\n\\n const handleSubmit = (e) => {\\n e.preventDefault();\\n const userData = {\\n email: data.email,\\n password: data.password\\n };\\n axios.post(\\"https://reqres.in/api/login\\", userData).then((response) => {\\n console.log(response.status, response.data.token);\\n });\\n };\\n\\n return (\\n <div>\\n <h1>Login Account</h1>\\n <form onSubmit={handleSubmit}>\\n <label htmlFor=\\"email\\">\\n Email\\n <input\\n type=\\"email\\"\\n name=\\"email\\"\\n value={data.email}\\n onChange={handleChange}\\n required\\n />\\n </label>\\n <label htmlFor=\\"password\\">\\n Password\\n <input\\n type=\\"password\\"\\n name=\\"password\\"\\n value={data.password}\\n onChange={handleChange}\\n required\\n />\\n </label>\\n <button type=\\"submit\\">Login</button>\\n </form>\\n </div>\\n );\\n};\\n\\n
The above code illustrates how you can make an Axios POST
request in React.
POST request using Axios with React Hooks

Let's look at another example, where we register a new user. We will use the useState
React Hook to manage state. Next, we set the value of our text inputs to our states (name
and job
) in our handleChange
function.
Finally, on form submission
, we make our Axios POST
request with the data in our state. See the code below:
// App.js\\n\\nimport React, { useState } from \\"react\\";\\nimport \'./styles.css\';\\nimport axios from \\"axios\\";\\n\\nconst App = () => {\\n const [state, setState] = useState({\\n name: \\"\\",\\n job: \\"\\"\\n });\\n\\n const handleChange = (e) => {\\n const value = e.target.value;\\n setState({\\n ...state,\\n [e.target.name]: value\\n });\\n };\\n\\n const handleSubmit = (e) => {\\n e.preventDefault();\\n const userData = {\\n name: state.name,\\n job: state.job\\n };\\n axios.post(\\"https://reqres.in/api/users\\", userData).then((response) => {\\n console.log(response.status, response.data);\\n });\\n };\\n\\n return (\\n <div>\\n <h1>Register or Create new account</h1>\\n <hr />\\n <form onSubmit={handleSubmit}>\\n <label htmlFor=\\"name\\">\\n Name\\n <input\\n type=\\"text\\"\\n name=\\"name\\"\\n value={state.name}\\n onChange={handleChange}\\n required\\n />\\n </label>\\n <label htmlFor=\\"job\\">\\n Job\\n <input\\n type=\\"text\\"\\n name=\\"job\\"\\n value={state.job}\\n onChange={handleChange}\\n required\\n />\\n </label>\\n <button type=\\"submit\\">Register</button>\\n </form>\\n </div>\\n );\\n};\\n\\n
You can also create a styles.css
file and copy the CSS styling below to style the app. It’s nothing fancy, but it improves the look of the UI:
// styles.css\\n\\nbody {\\n padding: 0;\\n margin: 0;\\n box-sizing: border-box;\\n font-family: sans-serif;\\n}\\nh1 {\\n text-align: center;\\n margin-top: 30px;\\n margin-bottom: 0px;\\n}\\nhr {\\n margin-bottom: 30px;\\n width: 25%;\\n border: 1px solid palevioletred;\\n background-color: palevioletred;\\n}\\nform {\\n border: 1px solid black;\\n margin: 0 28%;\\n padding: 30px 0;\\n display: flex;\\n flex-direction: column;\\n align-items: center;\\n justify-content: center;\\n}\\nlabel {\\n width: 80%;\\n text-transform: uppercase;\\n font-size: 16px;\\n font-weight: bold;\\n}\\ninput {\\n display: block;\\n margin-bottom: 25px;\\n height: 6vh;\\n width: 100%;\\n}\\nbutton {\\n padding: 10px 30px;\\n text-transform: uppercase;\\n cursor: pointer;\\n}\\n\\n
With that, we have our registration app to use our POST
method:
In the previous examples, we used promise chaining throughout. Similarly, you can also use the async/await syntax with Axios. When using async
and await
, we need to wrap our code in a try…catch
block as in the example below:
const handleSubmit = async () => {\\n try {\\n const response = await axios.post(url, userData);\\n console.log(response);\\n } catch (error) {\\n console.log(error);\\n }\\n };\\n\\n
From the above example, we are awaiting a response from our POST
request before we can perform an operation on the response. It works like the .then()
method we saw in the previous example.
POST request errors in Axios

As previously stated, one of the advantages of using Axios over the native Fetch API is that it allows us to handle response errors better.
\\nWith Axios, you can catch errors in the .catch()
block and check for certain conditions to establish why the error occurred so that you can handle it appropriately. Let’s see how you can do that below:
const App = () => {\\n const [data, setData] = useState({\\n email: \\"\\",\\n password: \\"\\"\\n });\\n\\n const handleChange = (e) => {\\n const value = e.target.value;\\n setData({\\n ...data,\\n [e.target.name]: value\\n });\\n };\\n\\n const handleSubmit = (e) => {\\n e.preventDefault();\\n const userData = {\\n email: data.email,\\n password: data.password\\n };\\n axios\\n .post(\\"https://reqres.in/api/login\\", userData)\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => {\\n if (error.response) {\\n console.log(error.response);\\n console.log(\\"server responded\\");\\n } else if (error.request) {\\n console.log(\\"network error\\");\\n } else {\\n console.log(error);\\n }\\n });\\n };\\n\\n return (\\n <div>\\n <h1>Login Account</h1>\\n <form onSubmit={handleSubmit}>\\n <label htmlFor=\\"email\\">\\n Email\\n <input\\n type=\\"email\\"\\n name=\\"email\\"\\n value={data.email}\\n onChange={handleChange}\\n />\\n </label>\\n <label htmlFor=\\"password\\">\\n Password\\n <input\\n type=\\"password\\"\\n name=\\"password\\"\\n value={data.password}\\n onChange={handleChange}\\n />\\n </label>\\n <button type=\\"submit\\">Login</button>\\n </form>\\n </div>\\n );\\n};\\n\\n
In the if
condition, we check if there is a response. In other words, if our request was sent and the server responded with an HTTP status code outside the 2xx
range. The HTTP status codes we can get here range from a 400
status code telling us the user does not exist or that there are missing credentials, a 404
error code telling us the page was not found, to a 503 error code telling us the service is unavailable, etc.
In the else if
condition, we checked to see if the request was made, but we received no response. This error is usually due to a network error or being offline.
Finally, if the error received does not fall under the two categories above, then the else
block catches it and tells us what happened, which is most likely because an error occurred in the process of setting up the POST
request. We can also use error.toJSON()
to make our error response more readable.
Making multiple GET requests with Axios

This section is slightly outside the scope of this tutorial, but it covers how to perform multiple GET
requests concurrently using Axios with error handling.
Because Axios returns a promise, we can perform multiple GET
requests using the Promise.all()
static method. Promise.all
takes an iterable of promises as an argument and returns a single promise. It is fulfilled if all the input promises are fulfilled, and is rejected immediately if one of the input promises is rejected:
const BASE_URL = \\"https://reqres.in/api/users\\";\\nconst userIds = [1, 2, 3];\\n\\nPromise.all(userIds.map((userId) => axios.get(`${BASE_URL}/${userId}`)))\\n .then((responses) => {\\n const users = responses.map(({ data }) => data);\\n console.log(users);\\n })\\n .catch((error) => {\\n if (error.response) {\\n // the request was made but the server responded with a \\n // status code outside the 2xx range\\n console.log(error.response);\\n } else if (error.request) {\\n // the request was made but no response was received\\n console.log(error.request);\\n } else {\\n // something happened when setting up the request\\n console.log(error.toJSON());\\n }\\n });\\n\\n
In the example above, we have an array of user IDs. We mapped through the array and used Axios to initiate GET
requests to our API. Promise.all
fulfills after all the promises have been fulfilled, and rejects immediately if one of the promises is rejected.
Additionally, Axios has the built-in axios.all
and axios.spread
helper functions for making concurrent requests. They have been deprecated, though they may still work. Instead, you should use Promise.all
, as in the example above.
One of the benefits of using Axios over the built-in Fetch API is that Axios gives you the ability to intercept requests and responses. With Axios interceptors, you can modify requests and responses before handling them in your fulfillment and rejection handlers.
\\nYou can mount an Axios request interceptor to intercept and modify request config objects like so:
\\nconst myAxios = axios.create();\\n\\nmyAxios.interceptors.request.use(\\n (config) => {\\n //Do something with the config object\\n //before sending the network request\\n return config;\\n },\\n (error) => {\\n //Do something with request error\\n //e.g. Log errors for easy debugging\\n console.error(error);\\n return Promise.reject(error);\\n }\\n);\\n\\n
Note that interceptors can be added to the default axios instance as well; attaching them to a custom instance created with axios.create(), as in the example above, simply keeps them scoped to that instance.
\\n\\nSimilarly, you can mount a response interceptor to modify response objects like so:
const myAxios = axios.create();

myAxios.interceptors.response.use(
  (response) => {
    // Do something with this response object.
    // Any HTTP status code in the 2xx range
    // will trigger this response interceptor.
    console.log(response.data);
    return response;
  },
  (error) => {
    // Any HTTP response code outside the
    // 2xx range will trigger this response
    // error interceptor.

    // Do something with the response error,
    // e.g. log errors for easy debugging.
    console.error(error);
    return Promise.reject(error);
  }
);
There are several use cases for Axios request interceptors. One of the main use cases is to authenticate users. Instead of manually adding authentication tokens, such as JWTs, to every request, you can add the token to the config object in the request interceptor.
\\nYou can also use request interceptors to transform request data. You can format the request object or include additional information, such as the request timestamp, to the payload before sending it to the server.
\\nRequest interceptors also come in handy for monitoring and logging purposes. You can log API endpoints, request methods, and any other request data that you can use later for debugging.
\\nThere are several use cases for intercepting responses in Axios. I will highlight some of them below.
\\nThe HTTP response you get from the server may have a payload with a different data type than the data type your application expects, especially when sourcing data from a third-party API. For example, the server response may be in XML format, but your frontend code expects JSON. You can intercept the HTTP response and transform the payload into JSON.
\\nSimilarly, the server response may contain more data than your application needs. You can intercept the response and extract only the data your application needs.
\\nYou can also use response interceptors for error handling. You can get different kinds of errors from the server. Instead of having error handlers littered throughout your codebase, you can handle these errors centrally in a response interceptor. You can intercept the HTTP
responses, log the errors for debugging, and handle them appropriately.
Such central error handling ensures your code is organized and maintainable. It will also ensure you provide appropriate client feedback regarding the success and failure of the POST
request.
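A minimal sketch of that centralized handling might look like this, again on the myAxios instance; the status checks and redirect target are assumptions:

myAxios.interceptors.response.use(
  (response) => response,
  (error) => {
    const status = error.response?.status;
    if (status === 401) {
      // Session expired: send the user back to the (assumed) login page
      window.location.assign("/login");
    } else if (status >= 500) {
      console.error("Server error:", status);
    }
    return Promise.reject(error); // let callers still handle the failure
  }
);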
Sending data in a POST request

Ordinarily, when posting a simple object using Axios, you pass a plain JavaScript object to a POST
request body, and by default, Axios will serialize your payload to a JSON string as in the code below.
By default, Axios will set the Content-Type
header to application/json
:
axios\\n .post(\\"/submit\\", { name: \\"Jane Doe\\", email: \\"[email protected]\\" })\\n .then((response) => console.log(response))\\n .catch((error) => console.error(error));\\n\\n
You can also submit HTML form data as JSON. However, you need to set the Content-Type
header to application/json
like so:
axios\\n .post(\\"/login\\", document.getElementById(\\"login-form\\"), {\\n headers: { \\"Content-Type\\": \\"application/json\\" },\\n })\\n .then((response) => console.log(response))\\n .catch((error) => console.error(error));\\n\\n
Depending on the data you want to transmit via a POST
request, sometimes you may want Axios to encode your payload in a different format than JSON, such as when uploading text files, images, audio, videos, and other multimedia files.
For the latest versions, Axios can encode the request body to multi-part form data out of the box if you explicitly set the Content-Type
header to multipart/form-data
. It’s the encoding you use when uploading files:
const response = await axios.post(\\n \\"/submit\\",\\n {\\n name: \\"Jane Doe\\",\\n email: \\"[email protected]\\",\\n avatar: document.querySelector(\\"input[name=\'avatar\']\\").files[0],\\n },\\n {\\n headers: {\\n \\"Content-Type\\": \\"multipart/form-data\\",\\n },\\n }\\n);\\n\\n
Similarly, with the latest versions of Axios, you can set the Content-Type
header to x-www-form-urlencoded
if you want Axios to URL encode your payload out of the box:
const response = await axios.post(\\n \\"/submit\\",\\n { name: \\"Jane Doe\\", email: \\"[email protected]\\" },\\n {\\n headers: {\\n \\"Content-Type\\": \\"application/x-www-form-urlencoded\\",\\n },\\n }\\n);\\n\\n
For earlier versions of Axios, you will need to URL encode the payload using APIs such as URLSearchParams
or a third-party npm package before posting it.
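A sketch of that manual encoding might look like this; the endpoint and values are placeholders:

const params = new URLSearchParams();
params.append("name", "Jane Doe");
params.append("email", "jane.doe@example.com");

// The string body is sent as application/x-www-form-urlencoded
axios.post("/submit", params.toString(), {
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
});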
POST request techniques

Cross-site request forgery (or XSRF for short) is a method of attacking a web-hosted app in which the attacker disguises themselves as a legitimate and trusted user to influence the interaction between the app and the user's browser. There are many ways to execute such an attack, including XMLHttpRequest
.
Fortunately, Axios is designed to protect against XSRF by allowing you to embed additional authentication data when making requests. This enables the server to discover requests from unauthorized locations.
\\nHere’s how this can be done with Axios:
\\nconst options = {\\n method: \'post\',\\n url: \'/login\',\\n xsrfCookieName: \'XSRF-TOKEN\',\\n xsrfHeaderName: \'X-XSRF-TOKEN\',\\n};\\n\\n// send the request\\naxios(options);\\n\\n
POST request timeouts and cancellations in Axios

The server can sometimes delay responding to user requests, or the network connection may become unavailable. Therefore, you will sometimes have to time out and cancel requests.
\\nIn Axios, you can set the response to timeout using the timeout
property of the config object like so:
axios\\n .post(\\"/submit\\", {}, { timeout: 1500 })\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => {\\n if (error.response) {\\n console.log(error.response);\\n } else if (error.request) {\\n console.log(error.request);\\n if (error.code === \\"ECONNABORTED\\") {\\n console.log(\\"Connection aborted\\");\\n }\\n } else {\\n console.log(\'Error\', error.message);\\n }\\n });\\n\\n
In the above example, Axios aborts the network connection if it fails to get a response from the server within 1.5 seconds. If the error is thrown because of a timeout, Axios will set the error code to ECONNABORTED
.
Alternatively, you can also timeout using the AbortSignal
API. AbortSignal
is now supported both in the browser and Node.js:
axios\\n .post(\\"/submit\\", {}, { signal: AbortSignal.timeout(1500) })\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => {\\n console.log(error);\\n });\\n\\n
Instead of a timeout, you can use the AbortController
API to explicitly abort the request. This is useful in situations where a user navigates away from a given page after a network request has been initiated:
const ac = new AbortController();\\n\\naxios\\n .post(\\"/submit\\", {}, { signal: ac.signal })\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => {\\n console.log(error);\\n });\\n\\nac.abort(\\"Request aborted\\");\\n\\n
You can use FormData
to handle file uploads. Avoid setting the Content-Type
header manually because if you do, the boundary
required for multipart/form-data
won’t be added automatically, leading to errors:
const formData = new FormData();\\nformData.append(\'file\', file);\\nformData.append(\'key\', \'value\');\\n\\naxios.post(\'/upload\', formData).then(response => console.log(response.data));\\n\\n
Set timeouts to avoid hanging requests and implement retry logic for network issues or server downtime:
\\nconst apiClient = axios.create({ timeout: 5000 });\\n\\nasync function makeRequestWithRetry(url, data, retries = 3) {\\n try {\\n return await apiClient.post(url, data);\\n } catch (error) {\\n if (retries > 0) {\\n return makeRequestWithRetry(url, data, retries - 1);\\n }\\n throw error;\\n }\\n}\\n\\nmakeRequestWithRetry(\'/endpoint\', { key: \'value\' });\\n\\n
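One refinement worth considering is a delay between attempts, so a struggling server gets room to recover. Here's a sketch with exponential backoff, reusing the apiClient instance from above:

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function postWithBackoff(url, data, retries = 3, attempt = 0) {
  try {
    return await apiClient.post(url, data);
  } catch (error) {
    if (retries === 0) throw error;
    await delay(1000 * 2 ** attempt); // wait 1s, 2s, 4s, ...
    return postWithBackoff(url, data, retries - 1, attempt + 1);
  }
}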
Implement throttling to limit the number of requests sent within a specific time frame:
let isThrottling = false;

async function throttledRequest(url, data) {
  if (isThrottling) {
    console.log('Throttling in effect, please wait...');
    return;
  }

  isThrottling = true;
  try {
    const response = await axios.post(url, data);
    return response.data;
  } finally {
    // Always release the flag, even if the request fails
    isThrottling = false;
  }
}

throttledRequest('/endpoint', { key: 'value' });
Why is my POST request failing?

Your Axios POST
request might not be working due to several common issues. Ensure the API URL is correct, and check that the required headers, such as Content-Type
, are properly set. CORS issues could also block the request. If you control the backend, enable CORS there, or use a proxy during development. Server-side issues or incorrect API implementation could be the cause, so test the API using tools like Postman or curl
to confirm it is functioning as expected.
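For example, a quick curl check against your endpoint might look like this; the URL and payload are placeholders:

curl -X POST https://example.com/api/login \
  -H "Content-Type: application/json" \
  -d '{"email":"user@example.com","password":"secret"}'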
The real fix for CORS issues is on the server. You need to specify the methods, headers, and origins that can send requests to the server in your backend. You can also use *
to make your APIs public:
const express = require(\'express\');\\nconst cors = require(\'cors\');\\n\\nconst app = express();\\n\\napp.use(cors({\\n origin: \'*\', // for public APIs\\n methods: [\'GET\', \'POST\', \'PUT\', \'DELETE\'],\\n allowedHeaders: [\'Content-Type\', \'Authorization\'],\\n}));\\n\\napp.use(express.json());\\n\\n
Here is what your Axios config should look like. Note that if you send credentials with withCredentials: true, the server must respond with an explicit origin rather than the * wildcard:
\\naxios.post(\'http://localhost:5000/api\', data, {\\n withCredentials: true,\\n headers: {\\n \'Content-Type\': \'application/json\',\\n \'Authorization\': \'Bearer token\',\\n },\\n});\\n\\n
To send a JSON payload in an Axios POST
request, you simply pass the JSON data as the second argument to the axios.post
method. Axios automatically sets the Content-Type
header to application/json
when you provide an object as the payload:
import axios from \'axios\';\\n\\nconst payload = {\\n name: \'John Doe\',\\n email: \'[email protected]\',\\n};\\n\\naxios.post(\'https://test.com/api/login\', payload)\\n .then(response => {\\n console.log(\'Response:\', response.data);\\n })\\n .catch(error => {\\n console.error(\'Error:\', error);\\n });\\n\\n
For custom headers, include a third argument containing the header config:
\\naxios.post(\'https://test.com/api/login\', payload, {\\n headers: {\\n \'Authorization\': \'Bearer your-token\',\\n \'Custom-Header\': \'custom-value\'\\n }\\n})\\n .then(response => {\\n console.log(\'Response:\', response.data);\\n })\\n .catch(error => {\\n console.error(\'Error:\', error);\\n });\\n\\n
Fixing Content-Type
and JSON payload errors in HTTP
requests often involves ensuring proper configuration of headers and data formatting. When sending a JSON payload, explicitly include the Content-Type: application/json
header in your request:
axios.post(\'/api/endpoint\', payload, {\\n headers: { \'Content-Type\': \'application/json\' }\\n});\\n\\n
Also, check the API documentation to confirm if any additional headers or specific data formats are required.
\\nWhen dealing with authentication failures in APIs like API keys, tokens, or OAuth, it’s important to debug the root cause to apply the appropriate fixes.
The most common causes of authentication failures are expired or invalid tokens, malformed Authorization headers, and missing or misconfigured API keys.
\\nHandling authentication with Axios for a POST
request usually requires including credentials in the request headers, depending on the API’s authentication method. Common approaches include:
Token-based auth, which sends Bearer ${token} in the Authorization header
API keys, which are sent in a custom header such as x-api-key
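As a quick illustration, both approaches boil down to a headers entry in the config object; the endpoint, payload, token, and apiKey below are placeholders:

// Bearer token in the Authorization header
axios.post("/api/resource", payload, {
  headers: { Authorization: `Bearer ${token}` },
});

// API key in a custom header
axios.post("/api/resource", payload, {
  headers: { "x-api-key": apiKey },
});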
Axios automatically stringifies the data
object and sets the Content-Type
header to application/json
. With Fetch, you need to handle these manually.
Axios has better error handling by automatically rejecting promises for HTTP
response statuses outside the 2xx range.
Fetch, on the other hand, resolves the promise even for errors, requiring additional steps to check the response status. While Fetch is a lightweight and native option for API requests, it requires more manual implementation of Axios’ built-in functionalities.
\\nAxios is a popular promise-based HTTP library that you can use both in the browser and Node.js. You can use it to make HTTP POST
, PUT
, PATCH
, GET
, and DELETE
requests both on the client and server side. In this article, our focus was on the POST
method.
Unlike the built-in Fetch API, Axios has several built-in features. With Axios, you can intercept and modify request and response objects. It also provides built-in features for automatic JSON serialization and deserialization and superior error handling capabilities to the Fetch API.
\\nThough powerful and feature-rich, you should be aware that Axios is a third-party package. Like most third-party packages, Axios has its downsides. You need to consider the extra bundle size, security, licensing requirements, and long-term maintenance before using.
\\n Modern web browsers ship with a growing number of powerful, native JavaScript APIs that let developers build more dynamic, performant, and user-friendly applications — no external libraries required. In this post, we’ll explore six of these APIs: structuredClone
, EyeDropper, AbortController
, Intersection Observer, ResizeObserver
, and the Clipboard API. For each, we’ll explain what problems it solves, when to use it, and how to implement it.
Before we dive in, it's worth understanding why these APIs aren't more widely known despite how useful they are: several only recently reached broad cross-browser support, and many developers reach for familiar libraries before checking what the platform now provides natively.
\\nstructuredClone
Deep copying objects has always been difficult in JavaScript. Aside from using an external library, one of the go-to solutions was to stringify the object and then JSON parse it. Though this works for simple objects, it fails for values JSON can't represent, such as undefined, NaN, Infinity, etc.; it also turns Date objects into plain strings and throws on circular references.
, a widely supported method for deep cloning complex objects without relying on an external library. While it’s been used internally for a while, it was only made available in the public API a few years ago. Say goodbye to Lodash!
// Original object with nested data\\nconst original = {\\n name: \\"Richardson\\",\\n age: 28,\\n address: {\\n city: \\"New York\\",\\n street: \\"Wall Street\\",\\n zip: \\"10001\\"\\n },\\n hobbies: [\\"reading\\", \\"hiking\\"],\\n created: new Date()\\n};\\n\\n// Create a deep clone using structuredClone\\nconst copy = structuredClone(original);\\n\\n// Modify the clone\\ncopy.name = \\"Jane\\";\\ncopy.address.city = \\"Los Angeles\\";\\ncopy.hobbies.push(\\"music\\");\\n\\n// The original object remains unchanged\\nconsole.log(\\"Original:\\", original);\\nconsole.log(\\"Clone:\\", copy);\\n\\n
See how the original object remains unchanged? Also, observe that the date in the created
field is a date object that is independent of original.created
. There is no explicit conversion of string to date here — it is all taken care of by the structuredClone
function. The nested object stored in the address field is a completely new object, and so is the array stored under the hobbies
key.
The EyeDropper API allows developers to build a color picker natively without relying on any component library or package. It is still experimental and available only in Chrome, Edge, and Opera. It comes in handy when building apps that allow users to edit or manipulate something, such as a photo editor or a whiteboarding or drawing application.
\\nBelow is a quick example code of how to use the EyeDropper API:
\\n<button id=\\"pickColor\\">Pick a Color</button>\\n<div\\n id=\\"colorDisplay\\"\\n style=\\"width: 100px; height: 100px; border: 1px solid #ccc\\"\\n></div>\\n<br />\\n<div>Pick a color</div>\\n<br />\\n<div style=\\"display: flex; flex-direction: row; height: 5rem\\">\\n <div style=\\"background-color: brown; width: 5rem\\"></div>\\n <div style=\\"background-color: crimson; width: 5rem\\"></div>\\n <div style=\\"background-color: blueviolet; width: 5rem\\"></div>\\n <div style=\\"background-color: chartreuse; width: 5rem\\"></div>\\n <div style=\\"background-color: darkgreen; width: 5rem\\"></div>\\n</div>\\n<script>\\n document.getElementById(\\"pickColor\\").addEventListener(\\"click\\", async () => {\\n if (\\"EyeDropper\\" in window) {\\n const eyeDropper = new EyeDropper();\\n try {\\n const result = await eyeDropper.open();\\n document.getElementById(\\"colorDisplay\\").style.backgroundColor =\\n result.sRGBHex;\\n } catch (e) {\\n console.error(\\"Error using EyeDropper:\\", e);\\n }\\n } else {\\n alert(\\"EyeDropper API is not supported in this browser.\\");\\n }\\n });\\n</script>\\n\\n
In this code snippet, we create a button element with ID pickColor
, and add a click listener. When clicked, we check if the browser supports the EyeDropper API. If it does, we create a new EyeDropper instance and open it.
The open function returns a promise that resolves to the color value selected using the eyedropper tool. When a color is selected, we set the div's background color to that value. If the browser doesn't support the EyeDropper API, we show an alert with an error message.
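One more detail worth knowing: according to the specification, open() also accepts an options object with an AbortSignal, so a picker session can be cancelled programmatically. A minimal sketch, with a hypothetical ten-second timeout:

```js
async function pickWithTimeout() {
  const eyeDropper = new EyeDropper();
  const controller = new AbortController();

  // Cancel the picker automatically after 10 seconds (illustrative value)
  setTimeout(() => controller.abort(), 10_000);

  try {
    const { sRGBHex } = await eyeDropper.open({ signal: controller.signal });
    console.log("Picked color:", sRGBHex);
  } catch (err) {
    // Rejects with an AbortError when cancelled or dismissed
    console.log("No color selected:", err.name);
  }
}
```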
EyeDropper demo
\\nAbortController
One of the common problems when building a search UI component is handling stale requests. For example, a request is triggered when a user types a character in an input box. To prevent unnecessary requests from being triggered, we can add a debounce. But now, imagine the user starts typing again after the debounce duration or maybe navigates to a different page. In that case, the already triggered request cannot be canceled, and we might get stale results that we don’t care about. This can hamper user experience and load the API server unnecessarily.
\\nTo abort requests that are already sent, we can use AbortController
. Let’s take a look at an example of this API in action:
import React, { useEffect, useState } from \'react\';\\n\\nfunction SearchComponent({ query }) {\\n const [results, setResults] = useState([]);\\n const [loading, setLoading] = useState(false);\\n\\n useEffect(() => {\\n const controller = new AbortController();\\n const signal = controller.signal;\\n\\n const debounceRequest = setTimeout(() => {\\n setLoading(true);\\n fetch(`https://api.example.com/search?q=${query}`, { signal })\\n .then(res => res.json())\\n .then(data => {\\n setResults(data.results);\\n setLoading(false);\\n })\\n .catch(err => {\\n if (err.name === \'AbortError\') {\\n console.log(\\"Request was aborted\\")\\n } else {\\n console.log(\'Search failed\', err);\\n setLoading(false);\\n }\\n });\\n }, 500);\\n\\n return () => {\\n clearTimeout(debounceRequest);\\n controller.abort();\\n };\\n }, [query]);\\n\\n return (\\n <div>\\n {loading && <p>Searching...</p>}\\n <ul>\\n {results.map(r => (\\n <li key={r.id}>{r.name}</li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\n\\n
There are a few things to note here. The query prop is given as an input to SearchComponent and is added to the dependency array of the useEffect Hook. So, every time query changes, the effect runs.
When the component mounts, and whenever the query prop changes, the useEffect Hook runs and creates an instance of AbortController. It then schedules the request behind a 500ms debounce using setTimeout.
After 500ms, the function is called, and it makes an API call to fetch data based on the query
passed to it. Upon receiving the response, the state is updated and displays the results in an unordered list.
Also, observe how signal
from AbortController
is passed to the fetch
call.
If the component gets unmounted (user navigates to a different route) or if the query changes (user types in a new character after 500ms but before the search results are displayed), the cleanup function is called, which clears the timeout and aborts the fetch call.
\\n\\nN.B., if you are using Axios with AbortController
, make sure that you are using v0.22.0 and above. The older versions don’t support AbortController
for cancelling requests.
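The same pattern works outside React with plain fetch, and newer browsers even ship a shortcut for the common timeout case. A quick sketch (the URLs are placeholders):

```js
const controller = new AbortController();

fetch("https://api.example.com/search?q=react", { signal: controller.signal })
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => {
    if (err.name === "AbortError") console.log("Request was aborted");
  });

// Call abort() whenever the result is no longer needed
controller.abort();

// For simple timeouts, the browser can create the signal for you
fetch("https://api.example.com/slow", { signal: AbortSignal.timeout(5000) });
```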
Are you trying to lazy-load images to optimize page load speeds? Or maybe load and play videos only when they’re visible? Maybe you’re trying to implement infinite scroll? Well, you’re in luck — you don’t need any npm packages to do any of that. Enter the Intersection Observer API.
Intersection Observer is a browser API that lets you run code when elements enter or leave the viewport (or another container element). Because it is natively supported, it is a clean and performant way to implement lazy loading or infinite scrolling.
\\nLet’s take a look at an example where we can use Intersection Observer to lazy load images:
\\n<!DOCTYPE html>\\n<html lang=\\"en\\">\\n <head>\\n <meta charset=\\"UTF-8\\" />\\n <title>Lazy Load Images</title>\\n <style>\\n body {\\n font-family: sans-serif;\\n margin: 0;\\n padding: 20px;\\n background: #f5f5f5;\\n }\\n img {\\n width: 100%;\\n max-width: 600px;\\n margin: 30px auto;\\n display: block;\\n min-height: 500px;\\n background-color: #ddd;\\n object-fit: cover;\\n transition: opacity 0.5s ease-in-out;\\n opacity: 0;\\n }\\n img.loaded {\\n opacity: 1;\\n }\\n </style>\\n </head>\\n <body>\\n <img\\n data-src=\\"https://images.unsplash.com/photo-1506744038136-46273834b3fb?w=800\\"\\n alt=\\"Forest\\"\\n />\\n <img\\n data-src=\\"https://images.unsplash.com/photo-1529626455594-4ff0802cfb7e?w=800\\"\\n alt=\\"Person\\"\\n />\\n <img\\n data-src=\\"https://images.unsplash.com/photo-1516117172878-fd2c41f4a759?w=800\\"\\n alt=\\"Laptop\\"\\n />\\n <img\\n data-src=\\"https://images.unsplash.com/photo-1741514376184-f7cd449a0978?w=800\\"\\n alt=\\"Mountains\\"\\n />\\n <img\\n data-src=\\"https://images.unsplash.com/photo-1743062356649-eb59ce7b8140?w=800\\"\\n alt=\\"Desert\\"\\n />\\n <img\\n data-src=\\"https://images.unsplash.com/photo-1470770903676-69b98201ea1c?w=800\\"\\n alt=\\"Bridge\\"\\n />\\n <!-- Some more images here --\x3e\\n <img\\n data-src=\\"https://images.unsplash.com/photo-1518609878373-06d740f60d8b?w=800\\"\\n alt=\\"Sunset\\"\\n />\\n <script>\\n const images = document.querySelectorAll(\\"img[data-src]\\");\\n const observer = new IntersectionObserver(\\n (entries, observer) => {\\n entries.forEach((entry) => {\\n if (!entry.isIntersecting) return;\\n const img = entry.target;\\n img.src = img.dataset.src;\\n img.onload = () => img.classList.add(\\"loaded\\");\\n img.removeAttribute(\\"data-src\\");\\n observer.unobserve(img);\\n });\\n },\\n {\\n root: null,\\n threshold: 0.1,\\n }\\n );\\n images.forEach((img) => observer.observe(img));\\n </script>\\n </body>\\n</html>\\n\\n
There are a few interesting things to note here. First, we have many img
tags with the image source URL set to the data-src
attribute. This prevents them from loading as soon as the page is rendered.
In the script tag, we run some JavaScript. First, we collect all img tags that have a data-src attribute using querySelectorAll; these will be used later when setting up the observer. Then, we create a single instance of IntersectionObserver. To the constructor, we pass a callback that is invoked whenever the visibility of any observed element changes.
Say, for example, when a user scrolls and the fourth image becomes visible in the viewport, the callback is triggered. The callback takes two arguments: entries
, which is a list of all observed elements whose visibility has changed, and observer
, which is the instance that can be used to perform cleanup actions.
Inside the callback, we only process entries (in our case, images) that intersect our viewport. For each image that is visible in the browser window, we add a src
attribute and remove the data-src
attribute. This then downloads the images and displays them in the window.
As a part of the cleanup, we unobserve the image using the unobserve
method on the observer. We pass two options to the constructor:
root
: This is the element the observer uses to check if the observed elements are intersecting or not. To set it to the viewport, we pass null
threshold
: This is a number between 0 and 1 that decides how much of an element must be visible before the callback is triggered. For example, if this value is set to 0.2, the callback fires when an observed element becomes 20% visible, either on entry or exit

Intersection Observer is supported by all major browsers.
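The same API covers the infinite scroll use case mentioned earlier. Here is a minimal sketch, assuming a sentinel element sits after the last list item and a hypothetical loadMoreItems() fetches the next page:

```js
const sentinel = document.querySelector("#sentinel");

const observer = new IntersectionObserver(
  ([entry]) => {
    // Start fetching slightly before the sentinel scrolls into view
    if (entry.isIntersecting) loadMoreItems();
  },
  { rootMargin: "200px" }
);

observer.observe(sentinel);
```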
\\nResizeObserver
If you are building a web application with a complex UI (e.g., a drag-and-drop website builder or form builder), you might have faced the need to run code when a specific HTML element changes size. However, HTML elements don’t emit resize events (except for the window
object, which isn’t helpful in most cases). Previously, your best options were workarounds like MutationObserver
or setInterval
. But not anymore!
ResizeObserver
allows us to observe an HTML element and run code when its size changes. It is supported by all major browsers and can be used in a variety of use cases, including auto-scaling text, rendering responsive charts, handling virtualized lists, and building size-aware UI components. It’s also super easy to use.
Let’s see how we can use ResizeObserver
to auto-scale text based on its parent’s size:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n <head>\\n <meta charset=\\"UTF-8\\" />\\n <title>Auto-Scaling Text</title>\\n <style>\\n body {\\n font-family: sans-serif;\\n padding: 2rem;\\n }\\n .container {\\n width: 60%;\\n height: 200px;\\n border: 2px dashed #aaa;\\n resize: horizontal;\\n overflow: hidden;\\n padding: 1rem;\\n }\\n .autoscale-text {\\n font-weight: bold;\\n white-space: nowrap;\\n overflow: hidden;\\n text-overflow: ellipsis;\\n transition: font-size 0.2s ease;\\n }\\n </style>\\n </head>\\n <body>\\n <div class=\\"container\\" id=\\"textBox\\">\\n <div class=\\"autoscale-text\\" id=\\"text\\">\\n Resize container to auto scale this text.\\n </div>\\n </div>\\n <script>\\n const container = document.getElementById(\\"textBox\\");\\n const text = document.getElementById(\\"text\\");\\n const resizeObserver = new ResizeObserver(() => {\\n const containerWidth = container.clientWidth;\\n const minFontSize = 12;\\n const maxFontSize = 70;\\n const minWidth = 200;\\n const maxWidth = 800;\\n const clampedWidth = Math.max(\\n minWidth,\\n Math.min(maxWidth, containerWidth)\\n );\\n const ratio = (clampedWidth - minWidth) / (maxWidth - minWidth);\\n const fontSize = minFontSize + (maxFontSize - minFontSize) * ratio;\\n text.style.fontSize = fontSize + \\"px\\";\\n });\\n resizeObserver.observe(container);\\n </script>\\n </body>\\n</html>\\n\\n
Let’s break this down. We have a div
with some text in it. This div
is resizeable (thanks to the resize:horizontal
CSS property). When this div changes its size, the font size of the text inside it auto-scales.
In the script
tag, we create a ResizeObserver
. To its constructor, we pass a callback function that is called when any of the observed elements’ sizes change. In the callback, we calculate the appropriate font size based on how large the div is with respect to its min and max width. We update the font size of the text element inline by updating its style attribute.
Similar to Intersection Observer, we can use one ResizeObserver
to observe multiple elements:
```js
const resizeObserver = new ResizeObserver((entries) => {
  // entries contains only the observed elements whose size changed
  entries.forEach((entry) => {
    console.log("Element resized:", entry.target);
    console.log(`New size: ${entry.contentRect.width}x${entry.contentRect.height}`);
  });
});

multipleDivs.forEach((d) => resizeObserver.observe(d));
```
ResizeObserver
is supported by all major browsers.
The most common method for copying data from a web app to the system clipboard is to use the good old document.execCommand(\'copy\')
. While this works, as web applications have become more complex and advanced, we've started to see the limitations of this approach. For starters, it only supports copying text to the clipboard — there is no support for pasting. It also doesn't require the user's permission, and most importantly, it has been deprecated and is not recommended for new projects.
So, what’s the alternative? The brand new feature-rich Clipboard API!
\\nThe Clipboard API was introduced to provide programmatic access to the system clipboard. Because the clipboard is a system resource that is outside the control of the browser, the browser prompts the user, asking for permission to access the clipboard data. Also, it is mandatory to serve the webpage over HTTPS when using the Clipboard API in a non-local environment. The Clipboard API is fairly new but has been adopted by all major browsers and has been available in all browsers since June 2024.
\\nNow, let’s see how we can use the Clipboard API to copy and paste text and images:
\\n<!DOCTYPE html>\\n<html lang=\\"en\\">\\n <head>\\n <meta charset=\\"UTF-8\\" />\\n <title>Clipboard API Example</title>\\n </head>\\n <body>\\n <h2>📋 Clipboard API Demo</h2>\\n <button id=\\"copyTextBtn\\">Copy Text to Clipboard</button>\\n <div class=\\"playground\\">\\n <div id=\\"copy-from\\">This will be copied to the clipboard!</div>\\n <div id=\\"copy-to\\"><textarea></textarea></div>\\n </div>\\n\\n <img id=\\"copyImage\\" src=\\"./sample1.png\\" alt=\\"Sample\\" />\\n <button id=\\"copyImageBtn\\">Copy Image to Clipboard</button>\\n <script>\\n // Copy Text\\n document\\n .getElementById(\\"copyTextBtn\\")\\n .addEventListener(\\"click\\", async () => {\\n try {\\n const e = document.getElementById(\\"copy-from\\");\\n await navigator.clipboard.writeText(e.innerText);\\n alert(\\"Text copied to clipboard!\\");\\n } catch (err) {\\n console.error(\\"Text copy failed:\\", err);\\n alert(\\"Failed to copy text.\\");\\n }\\n });\\n // Trigger copy event\\n document.addEventListener(\\"copy\\", () =>\\n alert(\\"Something was copied to the system clipboard\\")\\n );\\n // Trigger paste event\\n document.addEventListener(\\"paste\\", () =>\\n alert(\\"Something was pasted from the system clipboard\\")\\n );\\n // Copy Image\\n document\\n .getElementById(\\"copyImageBtn\\")\\n .addEventListener(\\"click\\", async () => {\\n const image = document.getElementById(\\"copyImage\\");\\n try {\\n const response = await fetch(image.src);\\n const blob = await response.blob();\\n const clipboardItem = new ClipboardItem({ [blob.type]: blob });\\n await navigator.clipboard.write([clipboardItem]);\\n alert(\\"Image copied to clipboard!\\");\\n } catch (err) {\\n console.error(\\"Image copy failed:\\", err);\\n alert(\\"Failed to copy image.\\");\\n }\\n });\\n </script>\\n </body>\\n</html>\\n\\n
Let’s focus on the JavaScript written in the script
tag. First, we use the writeText
API on the Clipboard object to write contents to the system clipboard. This is similar to the execCommand(\'copy\')
approach. We also add an event listener that runs when something is copied from the browser window. Remember, in the callback function, the data copied to the clipboard can’t be read; it can only be modified.
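To illustrate that last point, here is a small sketch of our own (not part of the demo above) that intercepts the copy event and appends attribution to whatever the user copied:

```js
document.addEventListener("copy", (event) => {
  const selection = document.getSelection().toString();

  // Replace the clipboard payload with an annotated version
  event.clipboardData.setData(
    "text/plain",
    `${selection}\n\nCopied from example.com`
  );

  // Stop the browser from writing the original selection
  event.preventDefault();
});
```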
For copying images, we fetch the image from the link in the src
attribute of the image
tag, convert it to a blob, and then write it to the system clipboard. Additionally, we add another event listener that is triggered when something is pasted in the browser window.
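Unlike execCommand, the Clipboard API also supports reading. A minimal paste sketch; the browser will ask for the clipboard-read permission the first time:

```js
async function pasteText() {
  try {
    // Prompts the user for permission in most browsers
    const text = await navigator.clipboard.readText();
    document.querySelector("textarea").value = text;
  } catch (err) {
    console.error("Clipboard read was blocked or failed:", err);
  }
}
```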
In this article, we explored a few hidden gems among the native browser APIs:

- structuredClone: Allows deep cloning of complex objects
- EyeDropper: Lets you build a native color picker without a library
- AbortController: Allows you to cancel in-flight fetch requests
- Intersection Observer: Runs code when elements enter or leave the viewport
- ResizeObserver: Allows developers to run code when an element changes its size
- Clipboard API: Provides programmatic copy-and-paste access to the system clipboard

These native browser APIs unlock a wide range of use cases — from deep cloning complex objects and observing element size or visibility to handling clipboard operations and color picking — all without reaching for external packages. By understanding and leveraging these tools, you can build faster, more efficient, and cleaner web applications. And the best part? They're all built right into the browser.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nnext.config.js
\\n In this post, you’ll learn how to manage environment variables in Next.js using .env
files. We’ll cover public vs. private variables, environmental variables file hierarchy, runtime limitations, and best practices for secure configuration in development and production.
Environment variables in Next.js are runtime configuration values defined outside the source code and injected into the application during build or runtime. They are loaded from .env
files or the system environment and accessed using process.env
.
Next.js supports multiple environment files: .env
, .env.local
, .env.development
, .env.production
, and .env.test
. These files are evaluated based on the current environment mode (NODE_ENV
), and Next.js prioritizes .env.local
for local overrides.
Next.js restricts which environment variables are exposed to the browser. Only variables prefixed with NEXT_PUBLIC_
are embedded in the client-side bundle. Variables without this prefix remain server-only, accessible exclusively on the Node.js backend. This division allows sensitive data like database credentials to stay on the server, while public-facing configuration like feature flags or API base URLs can be exposed to the frontend safely.
Environment variables are available in both server-side code (API routes, getServerSideProps
, getStaticProps
) and client-side code (via NEXT_PUBLIC_*
). However, any variable used in client-side code must be prefixed appropriately, or it won’t be included in the compiled JavaScript sent to the browser.
Editor’s note: This post was updated by Muhammed Ali in April 2025 to account for Next.js 15+ updates, explain the importance of environmental variables, and provide best practices for managing environmental variables.
\\nEnvironment variables allow separation between code and configuration. This ensures that the same codebase can run in multiple environments (development, staging, production) without changes to source files. Instead of hardcoding base URLs, API keys, feature toggles, or analytics tokens, they are abstracted through environment variables that are injected at build time or runtime, depending on the deployment setup.
\\nThis separation becomes important in CI/CD pipelines. During the build process, the .env.production
file or runtime variables set in the cloud platform define values such as the backend service URL, feature flags, and third-party service credentials. Modifying these values doesn’t require rebuilding the code unless the variable is needed at build time (like those used during static generation).
For variables only used at runtime, the application can remain unchanged and adapt based on the deployed environment, especially with dynamic routes using getServerSideProps
.
Environment variables consist of name-value pairs, like in the example below:
\\nAPI_KEY=1234\\nDB_CONN_STRING=\\"http://localhost:3000\\"\\n\\n
An application can have different requirements when running in development, testing, and production environments. Instead of having different codebases for the same application, where each codebase is tailored towards a specific environment, an application can have a single codebase whose behavior in the different environments is determined using the environment variable settings.
\\nThe application can read the environment variable settings and modify its behavior to meet the requirements of that specific environment. This keeps the codebase short, clean, and organized:
\\nprocess.env.API_KEY;\\nprocess.env.DB_CONN_STRING;\\n\\n
In a development environment, tools for code formatting, linting, and testing are essential.
\\nHowever, these tools are not necessary in the production stage. When working with tools such as npm, setting the value of the built-in NODE_ENV
environment variable to production
will install dependencies that the application needs in production, excluding those needed in the development stage. This behavior is configurable if necessary.
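With npm, that looks roughly like this (newer npm versions also accept the --omit=dev flag):

```bash
# Skips devDependencies such as linters and test runners
NODE_ENV=production npm install

# Equivalent on modern npm
npm install --omit=dev
```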
Similarly, during development, you may need to set up a local instance of your database server separate from the production server. You will have to use environment variables to pick the correct database connection string.
\\nUsually, modifying an environment variable modifies the behavior of an application without requiring you to rebuild or redeploy the application. But in frameworks like Next.js, some environment variables are hardcoded in the codebase at build time. You may need to build and redeploy after modifying an environment variable.
\\nAs mentioned earlier, Next.js has built-in support for environment variables. To start using them, you can declare .env.local
at the root of your project directory:
```
API_KEY=123456789
USERNAME=username
PASSWORD=password
```
Next.js will load the above environment variables into the process.env
object out of the box so that you can access them in the Node.js environment:
```js
export async function getStaticProps() {
  console.log(process.env.API_KEY);
  console.log(process.env.USERNAME);
  console.log(process.env.PASSWORD);
}
```
The above environment variables are only available in the Node.js environment and are referred to as private environment variables. Prefixing the variable name with NEXT_PUBLIC_
turns them into public environment variables:
```
NEXT_PUBLIC_API_KEY=123456789
NEXT_PUBLIC_USERNAME=username
NEXT_PUBLIC_PASSWORD=password
```
Public environment variables are available both in the browser and Node.js environments. We will explore public and private environment variables later in this article.
\\n\\nIn addition to the .env.local
file, Next.js also gives you the flexibility to store constants and default values in .env
, env.development
, and env.production
environment files. These files are not meant for storing secrets, but rather for environment-specific configurations and settings.
As its name suggests, the .env.development file is for environment variables you want to use only in development. In Next.js, you can launch the development server using the next dev command. The variables you declare in the .env.development file won't be available in production.
The .env.production file, on the other hand, is for variables that you want to use only in production. The next build command builds the project, and the next start command launches the production server. The variables you declare in the .env.production file won't be available in the development environment.
The variables you declare in the .env
file will, however, be available in both development and production environments. Be aware that an environment variable that you look up dynamically, such as in the example below, will not be hardcoded into your production build as we just described:
const variableName = \'BASE_URL\';\\nconsole.log(process.env[variableName]);\\n\\nconst envir = process.env;\\nconsole.log(envir.BASE_URL);\\n\\n
You can use this sample application to try out the examples in this article. Follow the steps below to clone the repository to your machine and experiment with the environment variables in this article.
\\nFirst, clone a sample application. Use git clone
to clone the sample application to your machine. Then, install dependencies by running the npm install
command.
There are several environment variable files at the root of the project directory. In Next.js, you should only store secrets in the .env.local file. Because environment variables holding secrets should not be shared, there is a .env.local.template file you can use to create your .env.local file.

To add secrets, create a .env.local file at the root of your project repository and copy the contents of .env.local.template into it. You can add any secrets you don't want to expose in a version control system to the .env.local file.
Next, you can launch the development server using the npm run dev
command. The environment variables in the .env.development
file are available only in the development environment. On the other hand, you can build the project and launch the production server using the npm run build
and npm run start
commands. The environment variables in the .env.production
file are available only in the production environment.
Now, the environment variables in the .env
file are available in both the development and production environments.
As we introduced earlier, Next.js environment variables can be categorized into public and private environment variables, but they are private by default.
\\nPrivate environment variables are only accessible from the Node.js environment. You can declare private environment variables in any of your .env*
files like so:
```
ENV_VARIABLE=env_variable
```
To make an environment variable public, prefix its name with NEXT_PUBLIC_
as in the example below:
```
NEXT_PUBLIC_ENV_VARIABLE=next_public_env_variable
```
At build time, Next.js will access and replace references to the public environment variables with their actual values in the codebase bundled for the browser environment.
\\nTherefore, in the example below, process.env.NEXT_PUBLIC_ENV_VARIABLE
will be replaced in line with the actual value of the environment variable at build time:
```jsx
<p>{process.env.NEXT_PUBLIC_ENV_VARIABLE}</p>
```
Public environment variables are mostly used for storing constants or default values that you don’t want to import or declare in multiple files. You can declare such variables once in a .env*
file, as in the example above, and access them from anywhere in your codebase. They are available both in the browser and in Node.js.
The requirements of an application while in development may not be the same as those in production. Therefore, some environment variables may only be required in development, others in production, and some in both environments.
\\nFor example, if you declare the environment variables below in the .env.production
file, they will be available only in the production environment:
BASE_URL=\'http://localhost:3000\'\\nNEXT_PUBLIC_BASE_URL=\'http://localhost:3000\'\\n\\n
Regardless of the file in which you declare the above environment variables, the BASE_URL
environment variable will be available in the Node.js environment because it is a private variable. And NEXT_PUBLIC_BASE_URL
will be available in both the browser and Node.js environments because it is a public environment variable.
next.config.js
Next.js offers flexible ways of working with environment variables. You can use the legacy env
property of the next.config.js
file to configure environment variables, or the newer, more intuitive, and ergonomic .env*
files described above.
You can declare the environment variable as a property of the env
object in the next.config.js
file and access it in your codebase as a property of the process.env
object:
```js
module.exports = {
  env: {
    BASE_URL: "http://localhost:3000",
  },
};
```
Be aware that using the next.config.js
file for your environment variables will inline the values of the variables at build time and bundle them in your frontend code, irrespective of whether you prefix the variable name with NEXT_PUBLIC_
or not:
```jsx
export const App = () => {
  return <p>{process.env.BASE_URL}</p>;
};
```
Next.js will evaluate all references to environment variables and hardcode them in your client code at build time. After that, all the evaluated values won’t respond to changes in the environment variable.
\\nIf your frontend application needs access to dynamic environment variables at runtime, you should set up your own API and provide the variables to your client code.
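One minimal way to do that is a small API route that reads the variable on every request. The route and the variable name here are hypothetical:

```js
// pages/api/config.js
// Runs on the server per request, so it always sees the live value
export default function handler(req, res) {
  res.status(200).json({
    featureXEnabled: process.env.FEATURE_X_ENABLED === "true",
  });
}
```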
\\nSimilarly, dynamically looking up or destructuring environment variables in your frontend code, as in the example below, won’t work. The value of the environment value will be undefined
. Always access your environment variables using process.env.ENV_VARIABLE_NAME
on the client side:
const varName = \\"BASE_URL\\";\\nconsole.log(`${varName} = ${process.env[varName]}`);\\n\\nconst { BASE_URL } = process.env;\\nconsole.log(`${varName} = ${BASE_URL}`);\\n\\n
As explained above, you can declare environment variables in different files. These files are for environment variables that are usually made available in specific environments.
\\nNext.js follows the order below when looking for an environment variable and stops after finding the variable it needs. The value of NODE_ENV
is set to development
in a development environment, production
in a production environment, and test
in a test environment:
process.env\\nenv.${NODE_ENV}.local\\nenv.local (env.local is not checked when NODE_ENV is test)\\nenv.${NODE_ENV}\\n.env\\n\\n
If you declare the same environment variable in multiple files, the value from the source that appears highest in the list above wins, because Next.js stops looking as soon as it finds the variable.
\\nIn Next.js, you can create an environment variable by referencing or composing other environment variables using the ${VARIABLE_NAME}
syntax. Next.js will expand and replace any referenced variable with its actual value.
The example below uses the BASE_URL
and API_KEY
environment variables to create API_URL
. The value of the API_URL
variable will evaluate to https://www.logrocket.com/api/v1?apiKey=12345
:
BASE_URL=\\"https://www.logrocket.com\\"\\nAPI_KEY=12345\\nAPI_URL=\\"${BASE_URL}/api/v1?apiKey=${API_KEY}\\"\\n\\n
Such a composition avoids repetition and keeps your environment variable files organized. You can now reference each environment variable independently from your codebase:
```js
console.log(process.env.BASE_URL);
console.log(process.env.API_KEY);
console.log(process.env.API_URL);
```
Cross-file environment variable referencing is also possible in Next.js. You can reference an environment variable declared in the .env.local file from the .env file. However, pay attention to the order of the environment variable lookup highlighted above if you have variables with similar names in different files.
Referencing a variable declared in .env.development
from .env.production
and vice versa won’t work because those environment variables will only be available in their respective environments.
Managing environment variables poorly can lead to misconfigurations, security leaks, or inconsistent behavior across environments. Next.js supports multiple .env
files and variable scoping rules, but it’s up to the developer to enforce structure and safety. The following practices help maintain clarity, prevent accidental exposure, and ensure reliable application behavior in every environment.
Use .env.local for local-only secrets, and never commit it to version control

.env.local is intended for environment-specific overrides that should not be shared, such as local database credentials, test API tokens, or private keys. This file is ignored by default via .gitignore. Keeping sensitive data in .env.local ensures each developer can configure their environment without risking exposure in commits or pull requests.
Be deliberate with the NEXT_PUBLIC_ prefix

Variables prefixed with NEXT_PUBLIC_ are embedded into the client-side bundle. Only expose values that are explicitly safe for the browser. Avoid overusing this prefix; leaking internal service URLs, tokens, or feature gates into the client increases the attack surface and creates coupling between server and client logic.
Relying on the implicit presence of environment variables increases the chance of silent failures. Instead, define a schema using libraries like zod
, joi
, or env-var
, and validate all required variables at application boot. This helps catch configuration errors early during deployment or local development.
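A minimal sketch of what boot-time validation can look like with zod, assuming zod is installed and these variable names exist in your project:

```js
// env.js: import this once at startup. parse() throws a readable
// error if a required variable is missing or malformed.
import { z } from "zod";

const envSchema = z.object({
  DATABASE_URL: z.string().url(),
  API_KEY: z.string().min(1),
  NEXT_PUBLIC_BASE_URL: z.string().url(),
});

export const env = envSchema.parse(process.env);
```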
Do not use if (process.env.NODE_ENV === \'production\')
as the primary switch for behavior. Instead, use explicit environment variables that define the behavior, such as FEATURE_X_ENABLED=true
or USE_CACHE_LAYER=false
. This makes the codebase more readable and decouples environment control from build stages.
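In practice that can be as simple as reading a dedicated flag once and branching on it; the flag name below is illustrative:

```js
// Behavior is controlled by an explicit flag, not by the build stage
const featureXEnabled = process.env.FEATURE_X_ENABLED === "true";

export function renderBanner() {
  return featureXEnabled ? "New experience" : "Classic experience";
}
```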
Environment variables influence how an application runs or behaves in different contexts and environments. Next.js, like most web frameworks, has the necessary setup for flexibly configuring and using environment variables in your application.
\\nIn this article, we explored the differences between private and public environment variables in Next.js. We also learned about functionalities that allow you to declare development-only and production-only environment variables, such as the .env.development
and env.production
files.
Finally, we learned that in a typical Next.js project, you only need the .env.local
file to store credentials that you want to keep secret. Always add the .env.local
file to your .gitignore
file to avoid exposing your secrets in a version control system like Git.
Happy coding!
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nuseActionState
?\\n useActionState
\\n useActionState
: Practical examples\\n useActionState
Managing form state in React has never been the most fun part of building apps. Between tracking input values, handling async submissions, showing loading spinners, and dealing with errors, things can get messy fast. You usually end up juggling useState
, useEffect
, and a bunch of extra logic just to make a simple form work smoothly.
That’s where useActionState
comes in. It’s a handy little Hook from React that makes it way easier to handle user actions, especially things like form submissions and async state changes. Instead of writing tons of boilerplate, you get a cleaner, more predictable way to manage it all.
In this guide, we’ll walk through how useActionState
works, when to use it, and share a bunch of examples so you can see it in action.
Let’s dive in and make your favorite app (and your life) much simpler.
What is useActionState?

At a high level, useActionState is a React Hook that ties a user action (like submitting a form) to a piece of state. It takes care of updating that state based on what happens when the action runs.
Here’s what the basic usage looks like:
```js
const [state, formAction, isPending] = useActionState(actionFn, initialState);
```
Here’s how the arguments work:
\\nactionFn
— Your function that runs when the user submits the form or clicks the button. It receives the current state as the first argument, followed by the usual form data. This makes it easy to perform stateful updates based on previous results

initialState
— Sets the starting state before any submissions. This can be any serializable value, like an object, string, or number

What useActionState
returns:
state
— The current state returned from your action. Initially, it uses initialState
, and then updates with the result of each form submission

formAction
— Passed directly to your <form action={formAction}>
. This is what ties your form to the logic inside actionFn
isPending
— A boolean that’s true
while the action is running. It's perfect for showing loading spinners or disabling buttons during submission

This Hook is especially handy for forms, where you often need to juggle a lot—submitting data, validating inputs, showing feedback messages, and handling errors. Instead of wiring up all of that manually, useActionState
gives you a cleaner, more streamlined way to manage it.
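For completeness: the Hook also accepts an optional third argument, a permalink string, which is mainly useful with Server Actions. If the form is submitted before the JavaScript bundle has loaded, the browser posts to that URL instead. The path below is a placeholder:

```js
const [state, formAction, isPending] = useActionState(
  actionFn,
  initialState,
  "/submit-feedback" // fallback action URL for progressive enhancement
);
```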
useActionState: Practical examples

Let's start with something basic — a counter app. Normally, you'd reach for useState
to manage the count, but useActionState
offers a cleaner path, especially when your updates involve async behavior (like writing to a server or database).
Here’s a minimal example to illustrate how it works:
\\n\\"use client\\";\\n\\nimport { useActionState } from \'react\';\\n\\nasync function increment(prevCount) {\\n await new Promise(resolve => setTimeout(resolve, 1000)); // Simulate async delay\\n return prevCount + 1;\\n}\\n\\nfunction CounterApp() {\\n const [count, formAction, isPending] = useActionState(increment, 0);\\n\\n return (\\n <form action={formAction}>\\n <p>Count: {count}</p>\\n <button disabled={isPending}>\\n {isPending ? \'Incrementing...\' : \'Increment\'}\\n </button>\\n </form>\\n );\\n}\\n\\nexport default CounterApp;\\n\\n
In this example, useActionState
handles the entire state update process for the counter. The increment
function simulates an async operation, like fetching new data or updating a value on the server, and returns the updated count.
Inside the component, we get three values from the Hook: count
to display, formAction
to plug into the <form>
, and isPending
to disable the button while the update is in progress.
What’s great is that we don’t need to manually manage loading state or write extra logic to track updates. useActionState
takes care of it for us. It keeps the component simple even when working with async operations.
Now let’s take it a step further. Beyond simple counters, useActionState
really shines in real-world scenarios like handling form submissions. In real-world apps, form submissions often involve async operations like API calls, along with loading states and user feedback. useActionState
lets us manage all of that in a clean, declarative way.
In the example below, we simulate a form submission with a delay and display a success message when it’s done—all without any extra state or effects:
\\n\\"use client\\";\\n\\nimport { useActionState } from \\"react\\";\\n\\n// submit form action\\nasync function submitForm(prevState, formData) {\\n await new Promise((resolve) => setTimeout(resolve, 1500));\\n const email = formData.get(\\"email\\");\\n if (!email || !email.includes(\\"@\\")) {\\n return { success: false, message: \\"Please enter a valid email address.\\" };\\n }\\n return { success: true, message: \\"Form submitted successfully!\\" };\\n}\\n\\nfunction FormApp() {\\n const [state, formAction, isPending] = useActionState(submitForm, {\\n success: null,\\n message: \\"\\",\\n });\\n\\n return (\\n <div className=\\"form-container\\">\\n <div className=\\"form-card\\">\\n <form action={formAction}>\\n <input\\n className=\\"form-input\\"\\n type=\\"text\\"\\n name=\\"name\\"\\n placeholder=\\"Name\\"\\n />\\n <input\\n className=\\"form-input\\"\\n type=\\"email\\"\\n name=\\"email\\"\\n placeholder=\\"Email\\"\\n />\\n <button className=\\"form-button\\" disabled={isPending}>\\n {isPending ? \\"Submitting...\\" : \\"Submit\\"}\\n </button>\\n {state.message && (\\n <p\\n className={`form-message ${state.success ? \\"success\\" : \\"error\\"}`}\\n >\\n {state.message}\\n </p>\\n )}\\n </form>\\n </div>\\n </div>\\n );\\n}\\n\\nexport default FormApp;\\n\\n
In this case, we’re dealing with a classic form submission — something every app needs. But instead of juggling multiple state variables for loading, success, and error handling,
useActionState
simplifies it into a single Hook. The result is a cleaner, more readable form component with less boilerplate to maintain.
In this example, we’ll see how to pair Server Functions with useActionState
to build a like button component without any local state management or effect Hooks:
\\"use client\\";\\n\\nimport { useActionState } from \\"react\\";\\nimport { toggleLike } from \\"../actions\\";\\n\\nfunction LikeButton({ initialLiked }) {\\n const [liked, formAction] = useActionState(toggleLike, false);\\n return (\\n <form action={formAction} className=\\"like-container\\">\\n <button className=\\"like-button\\">{liked ? \\"❤️ Liked\\" : \\"♡ Like\\"}</button>\\n </form>\\n );\\n}\\n\\nexport default LikeButton;\\n\\n\\n// actions.ts\\n\\n\\"use server\\";\\n// Simulate DB update or external call\\nexport async function toggleLike(prevLiked) {\\n await new Promise((resolve) => setTimeout(resolve, 1000));\\n return !prevLiked;\\n}\\n\\n
The toggleLike function runs on the server and simply flips the like state. On the client side, useActionState wires it up neatly, handling the async interaction and re-rendering based on the latest state.
It’s a small UI pattern, but this example shows how powerful the combo of Server Functions and useActionState
can be — clean, minimal, and no extra boilerplate.
Managing multiple useActionState Hooks

So far, we've seen how useActionState
can simplify a single interaction, like submitting a form or toggling a like button. But what happens when you have multiple independent actions on the same component?
Let’s look at a real-world example: a social post UI where users can both like and follow. Each action has its own async logic, but with useActionState
, managing them side by side is simple and clean — no messy state or loading flags scattered all over:
\\"use client\\";\\n\\nimport { useActionState } from \\"react\\";\\n\\nimport { toggleLike, toggleFollow } from \\"../actions\\";\\n\\nfunction SocialActions() {\\n const [liked, likeAction] = useActionState(toggleLike, false);\\n const [following, followAction] = useActionState(toggleFollow, false);\\n\\n return (\\n <div className=\\"social-actions\\">\\n <form action={likeAction}>\\n <button className=\\"like-button\\">\\n {liked ? \\"❤️ Liked\\" : \\"♡ Like\\"}\\n </button>\\n </form>\\n\\n <form action={followAction}>\\n <button className=\\"follow-button\\">\\n {following ? \\"✔ Following\\" : \\"+ Follow\\"}\\n </button>\\n </form>\\n </div>\\n );\\n}\\n\\nexport default SocialActions;\\n\\n\\n// actions.ts\\n\\n\\"use server\\";\\n\\nexport async function toggleLike(prevLiked: boolean) {\\n await new Promise((res) => setTimeout(res, 800));\\n return !prevLiked;\\n}\\n\\nexport async function toggleFollow(prevFollowing: boolean) {\\n await new Promise((res) => setTimeout(res, 1000));\\n return !prevFollowing;\\n}\\n\\n
Each button in this example is wired to its own Server Function and keeps its state isolated. There’s no need to juggle useState
or track loading states manually — useActionState
handles it all in a neat, declarative way.
useActionState
is one of those Hooks that quietly makes your UI logic easier, especially when you’re dealing with async flows like form submissions or server interactions. It also lets you pair state updates directly with Server Actions, so you don’t need to juggle multiple useState
, useEffect
, or loading/error flags.
If you’ve ever felt like you’re writing too much code to manage state transitions or loading indicators, give this Hook a try. It’s a small shift in mindset, but one that can make your codebase cleaner, more maintainable, and more fun to work with.
\\n\\nHappy coding!
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
When discussing 3D web development, Three.js and Babylon.js are two libraries that tend to dominate the conversation. While both libraries support creating 3D experiences in the browser, there are a few key differences between them.
\\nIn this article, we’ll compare Three.js with Babylon.js. We’ll cover their background, how to get started with each, their core features, and key differences. In addition, we’ll provide an overview of where each library might be most useful.
\\nWe’ll explore both libraries in depth in this article, but I imagine some of you are looking for a short answer. Here’s the TL;DR:
\\nThree.js is a lightweight rendering engine; it gives you a lot of control and integrates easily with other web frameworks. However, you’ll often need third-party add-ons or custom code to handle things like physics, animation systems, or complex interactions.
\\nBabylon.js, on the other hand, is a complete 3D engine. It comes with built-in systems for physics, animations, GUI, and most of the functionality you’d need right out of the box. With that in mind, it’s clear these two engines take very different approaches, and the rest of this article will break down those differences in detail.
\\nThree.js was created in 2010 by Ricardo Cabello, a designer and developer who wanted to make it easier to work with WebGL. WebGL is powerful but super low-level; it’s like writing assembly for 3D graphics. Cabello’s idea was to build a higher-level API that hides all the WebGL boilerplate.
\\nFrom there, Three.js started as a simple rendering engine that could draw things like cubes and spheres on a canvas. Over time, other developers jumped in and helped make it more modular and powerful. As of this writing, Three.js has over 35,000 stars on GitHub.
\\nGetting started with Three.js is pretty straightforward; you only need to include its CDN in your bare HTML file, like below:
\\n<script src=\\"https://cdnjs.cloudflare.com/ajax/libs/three.js/0.174.0/three.tsl.js\\"></script>\\n
Or install it via npm:
\\nnpm install three\\n
Then import it into your project:
\\nimport * as THREE from \'three\';\\n
Once installed, you can start importing and using its features in your application. Below are some of the core ones worth exploring.
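To get a feel for the API before diving into those features, here's a minimal spinning-cube scene. This is a sketch of typical usage rather than code from the official docs:

```js
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial());
scene.add(cube);

// Render loop: rotate the cube a little every frame
renderer.setAnimationLoop(() => {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```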
\\nIn addition to its main WebGL rendering ability that lets you display 3D graphics inside a browser canvas, Three.js provides a couple of higher-level tools out of the box. Some of them include:
\\nThat covers the basics of Three.js. Now, we’ll look at Babylon.js and then get into how they compare in terms of features and performance.
\\nBabylon.js was started in 2013 by David Catuhe, a Microsoft engineer. It came a few years after Three.js but with a different philosophy. While Three.js focused on being a lightweight WebGL wrapper, Babylon.js went straight for the full-game engine route. Out of the box, it comes with a full physics system (via plugins like Cannon.js, Ammo.js, or Oimo.js), a powerful PBR (physically based rendering) pipeline, support for VR/AR, and a ton of editor and tooling support.
\\nAnother important factor is that Babylon.js is officially backed by Microsoft, which means more stability and active development. At the time of writing, Babylon.js has around 3.5 thousand stars on GitHub.
\\nGetting started with Babylon.js is also easy. You can include it directly from a CDN:
\\n<script src=\\"https://cdn.babylonjs.com/babylon.js\\"></script>\\n
Or install it via npm:
\\nnpm install babylonjs\\n
Then import what you need for your project:
\\nimport { Scene, Engine } from \'babylonjs\';\\n
With this, you can start using Babylon’s high-level functions to build interactive 3D scenes. Let’s go over some of the most important ones.
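For comparison, here's a minimal Babylon.js scene. It's a sketch that assumes an existing <canvas id="renderCanvas"> element on the page:

```js
import { ArcRotateCamera, Engine, HemisphericLight, MeshBuilder, Scene, Vector3 } from "babylonjs";

const canvas = document.getElementById("renderCanvas");
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

// Orbit camera around the origin, controllable with the mouse
const camera = new ArcRotateCamera("camera", Math.PI / 2, Math.PI / 3, 5, Vector3.Zero(), scene);
camera.attachControl(canvas, true);

new HemisphericLight("light", new Vector3(0, 1, 0), scene);
MeshBuilder.CreateBox("box", { size: 1 }, scene);

engine.runRenderLoop(() => scene.render());
```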
\\nBabylon.js comes pre-packed with features. A lot of integration you’d normally need third-party libraries for in other engines is built right in. Here are some of its biggest ones:
\\nBefore comparing them, it’s important to address that Three.js and Babylon.js are not exactly solving the same problem, so a full-blown comparison might not be fair.
\\nThree.js is more of a lightweight rendering engine. Babylon.js, on the other hand, leans toward being a full-fledged 3D engine. That said, it’s only fair to compare them when you’re picking a tool for a specific kind of job and want to decide between flexibility vs. features or minimalism vs. tooling support.
\\nWith that in mind, here’s how both library generally compares:
\\nFeature | \\nThree.js | \\nBabylon.js | \\n
---|---|---|
Ease of Use | \\nMinimal API surface, very hands-on. You wire things together yourself. Great for devs who want control | \\nMore guided and structured. Built-in helpers, scene system, and GUI make it easier to get started fast | \\n
Rendering Capabilities | \\nStrong WebGL renderer with PBR support. Very customizable; you can build your own render loop, post-processing, etc | \\nHigh-quality PBR out of the box, HDR, shadows, glow layers, and a full scenegraph system. Visual fidelity is great with less setup | \\n
Physics Engine | \\nNot included. You choose and integrate one yourself (like Cannon.js or Ammo.js) | \\nBuilt-in integration for Cannon.js, Ammo.js, and Oimo.js. Setup is simpler and more consistent | \\n
Animation System | \\nFully featured but low-level. You can animate anything with custom code or the AnimationMixer | \\nBuilt-in system supports skeletal, morph, and property animations. Also has animation blending and a visual editor via the Inspector | \\n
Interactivity & UI | \\nNo built-in UI system. You’d typically use DOM overlays or external libraries like dat.GUI or custom HTML | \\nComes with a full 2D GUI framework built into the engine e.g buttons, sliders, HUDs are all drawn in the WebGL canvas | \\n
WebXR & VR/AR Support | \\nHas support via the webxr manager, but you’ll be doing some manual setup | \\nWebXR is first-class. AR/VR setup is simpler, and Babylon even has helper scenes for XR experiences | \\n
Community & Documentation | \\nLarger community, more tutorials, more third-party examples and demos. Devs use it across many types of web projects | \\nSmaller community but great official docs. Backed by Microsoft, with consistent updates and solid long-term support | \\n
In terms of usage, Three.js leads with over 1.8 million weekly downloads, compared to around 11,000 for Babylon.js, according to npm trends.
\\nA major reason for this is how easy it is to get started with Three.js and how much it has grown on developers over time. Also, the recent trend of developers vibe coding 3D web games with AI earlier this year likely contributed to the recent spike in downloads.
Per Bundlephobia, the minified + gzipped size of the latest version of Three.js (v0.175.0) is around 168.4 kB, while Babylon.js (v8.1.1) comes in at about 1.4 MB. This size difference makes sense since Babylon.js ships with more built-in features. Also, Babylon.js is modular, which means you can reduce the size by importing only what you need instead of pulling in the entire library. However, size alone doesn’t tell the whole performance story.
\\nTo make a fair comparison, we’ll render the same .glb
model in both engines and measure their runtime performance – specifically FPS over time, initial load time, and general responsiveness.
I’ve put together two sample projects for this comparison. You can check them out via this GitHub repo. Each project loads the same .glb
model, sets up basic lighting and a camera, and shows live FPS in the corner.
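The FPS readout in both projects boils down to a few lines of requestAnimationFrame bookkeeping. The actual repo code may differ, but the idea is roughly this (assumes an element with the ID fps):

```js
let frames = 0;
let last = performance.now();

function tick(now) {
  frames++;
  // Refresh the readout once per second
  if (now - last >= 1000) {
    document.getElementById("fps").textContent = `${frames} FPS`;
    frames = 0;
    last = now;
  }
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);
```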
After running the test with a 19 MB "Shipwreck turned into hideout" model, here's how the performance looked:
\\nFrom this test, Three.js delivered slightly lower FPS and load time, which makes sense given its minimal setup. Babylon.js was marginally heavier but offered more stability during user interaction.
\\nChoosing between Three.js and Babylon.js mostly depends on what you’re building and how much control you want. The following breakdown can help you guide that decision:
\\nChoose Three.js if:
\\nChoose Babylon.js if:
\\nThree.js is mostly ideal if you want a lot of control and need to integrate smoothly with other web tools or frameworks. Babylon.js, on the other hand, is a better fit when you’d rather have more built-in functionality and don’t mind working within the structure the engine provides.
\\nThroughout this article, we’ve explored the major differences between Three.js and Babylon.js. We’ve looked at how both libraries started, how to integrate them into your application, and their core features. We also compared their feature sets, usage over the years, performance, and where each library is best suited for different use cases.
\\nThanks for reading!
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
In developer circles today, AI dominates the conversation. In fact, developer assistance has become a primary benchmark for evaluating leading AI tools. Frontend development is evolving rapidly, with AI driving this transformation.
\\nYet here’s the hard truth: most developers are using AI wrong. They either depend on it uncritically — generating messy code — or avoid it completely, missing significant productivity opportunities.
\\nThe 2025 State of Web Dev AI survey reveals a striking pattern: while 82% of frontend developers have experimented with AI tools, only 36% have successfully incorporated them into their daily workflows. For frontend developers, AI should function as more than a sophisticated autocomplete; it should be a force multiplier that helps you build better UIs faster while maintaining clean, sustainable code.
\\nBefore we proceed any further, the quick answer to this is no.
\\nStill, not all frontend developers are eager to embrace AI tools — and that’s completely understandable. Despite the advancements these assistants offer, hesitation often stems from valid concerns about reliability, control, and long-term skill development.
\\nSome engineers may warn against using AI tools, arguing they’ll make you overly reliant and erode your core skills. But the truth is, not long ago, we were all digging through GitHub issues, Stack Overflow threads, and niche forums just to find a working solution.
\\nThe performance gains from AI are real — they just need to be used thoughtfully to avoid introducing unnecessary complexity. While I don’t believe tools like LLMs will replace frontend engineers anytime soon, I do think the role is shifting. Developers may soon spend less time worrying about framework syntax and more time focusing on high-level problem solving and delivering reliable, high-quality code.
\\nAI coding assistants are tools that use advanced machine learning algorithms and data to enhance your coding process by providing features like intelligent code completion, code suggestions, and error detection. They can also generate entire code snippets, hence saving time on repetitive tasks and automating boring routines in our coding jobs, allowing us to focus more on delivering value to end users quicker.
\\nOver time, these tools have evolved from merely spitting out code based on prompts to being fully integrated into the actual coding experience, including brainstorming and debugging, as well as giving contextual insights to guide frontend engineers in their process of creating exceptional and intuitive web applications.
\\nAI assistants can help generate UI components, layout structures, and design suggestions from simple prompts. This speeds up prototyping, enabling developers and designers to test and iterate on ideas more quickly.
\\nIt’s never been easier to write code. Modern AI tools use contextual awareness to suggest the next line of code, generate entire code blocks, and adapt to your coding patterns and codebase — streamlining development like never before.
\\nAI-powered assistants can review pull requests, recommend improvements, and help enforce coding standards. This enhances team collaboration and reduces the review load on senior frontend developers.
\\nAutomation is one of AI’s biggest strengths. In frontend development, AI assistants can automatically detect errors, suggest fixes, and even generate tests as you code — making debugging faster and more efficient.
\\nAI assistants act like an extra set of hands right in your editor. For teams with tight budgets, they can reduce the need for additional developers by boosting productivity without increasing headcount.
\\n\\nIn this section, we’ll explore how different types of AI tools can support various stages of app development. From code completion and generation to AI-powered editors, design-to-code platforms, quality assurance, security, and collaboration tools — each category brings unique value to your workflow. You can also check out this article for a more comprehensive list of AI coding tools to integrate into your workflow.
\\nDeveloped by GitHub and OpenAI, GitHub Copilot intelligently suggests code snippets and entire blocks as you type, using context from your comments, file structure, and existing code. It integrates seamlessly with VS Code and other popular IDEs through its extension.
\\n\\n
Imagine you’re developing a React application and need to quickly scaffold a responsive navigation bar.
\\n// Build a responsive navbar with Tailwind CSS that collapses on mobile
Note: Make use of Copilot whenever you need some deeper clarification about design tradeoffs. You should ask questions like “How can I improve accessibility for this navbar?” to receive context‑specific recommendations.
\\nThis is an AI-powered IDE inspired by VS Code. Unlike traditional code editors, Cursor doesn’t just autocomplete your code based on patterns. Instead, it leverages deep code analysis (via transformer models and AST parsing) to understand the entire semantics of your project.
\\nThis means it can analyze dependencies, comprehend complex logic, and even predict how changes in one part of your code might affect the whole system:
\\nWith your project indexed, open Cursor’s command palette and type a detailed instruction such as:
\\nRefactor all callback-based functions in the user authentication module to async/await, ensuring error handling remains intact.\\n\\n
Once this command is run, Cursor will scan all relevant files, analyze dependencies, and propose bulk changes.
\\nNote: Use Cursor’s agent mode for your bulk code refactoring. It automates multi‑file operations and helps to reduce manual effort and human error.
\\nDeepCode AI, now an integral part of Snyk, an AppSec solution for developers and security teams, detects security flaws in your codebase. It runs an in-depth static analysis and compares your code against millions of best-practice patterns. Through this process, DeepCode pinpoints issues like injection risks, improper error handling, and potential performance bottlenecks:
\\nWhen building frontend apps that handle sensitive data, integrating DeepCode AI into your CI/CD pipeline can boost security with automated code reviews. While it’s a powerful tool, it may produce false positives — so manual review is still recommended. Keep in mind that DeepCode focuses on code analysis, so it doesn’t offer real-time autocomplete or code generation features.
\\nCodeParrot AI is a VS Code extension that converts design inputs — like Figma files or screenshots — into clean, maintainable, production-ready components across various frontend frameworks. It’s especially useful for building web and mobile apps, crafting landing pages, and generating HTML emails:
\\nThis tool is particularly valuable for frontend teams that need to ship quickly and can’t afford to start from scratch. With CodeParrot AI, you can also specify coding standards such as style guidelines and naming conventions to ensure overall consistency.
\\nHead over to Figma and select the component you want to create. Right-click and select Copy/Paste as, then click on Copy link to selection. This will give you the link to the particular component:
\\n
Now, click on the Figma icon, paste the Figma link there, and submit:
\nAfter a few minutes, a preview will be shown, and then you can request the code.
\\nThe integration of AI for frontend development is a genuine game-changer, opening up innovative approaches that were previously unexplored. Frontend AI tools like CodeParrot represent a significant advancement in how we approach our daily development tasks.
\\nHowever, I can’t emphasize enough that these frontend AI tools should be viewed as assistants rather than replacements. Attempting to substitute human developers with AI might lead to serious technical issues that ultimately require more time to fix, increase costs, and potentially result in inconsistent frontend software.
\\nWhen implemented thoughtfully within your workflow, frontend AI can dramatically enhance productivity without sacrificing quality. The key is finding the right balance between leveraging these powerful tools and applying your irreplaceable human expertise.
\\nHappy coding!
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nImage
\\n Image
component\\n Image
Component\\n next/image
\\n The Next Image
Component is the built-in image optimization solution for Next.js. It extends the native HTML <img>
element with built-in features for automatic image optimization.
Image
offers image optimization features such as automatic image compression, resizing, caching, and converting images to more efficient formats like WebP where possible. These features significantly improve the user and developer experience.
By default, the Next.js image optimization API will optimize and serve your images from the Next.js server. However, you can opt out of this default behavior and use other image optimization solutions such as Fastly Image Optimizer.
\\nTherefore, the Next.js Image Component is not only feature-rich but flexible as well. In this article, we’ll explore automatic image optimization using the Next.js built-in Image
component.
Image
A picture is worth a thousand words, and images are an important part of the web for communicating with users. Defining them in their most basic form is straightforward using the HTML <img>
element:
<img src=\\"image.jpg\\">\\n\\n
You can add an alternative text (alt
) to describe the image for screen readers or if the browser fails to load it:
<img src=\\"image.jpg alt=\\"describe the image here\\"/>\\n\\n
But with images on the web, the devil is in the details. To improve user and developer experience, there is a need for automatic image optimization.
\\nTo serve the most optimal image efficiently, you need to focus on the image size, format, and responsiveness. This may be easy when dealing with static images such as icons.
\nHowever, some images, such as profile photos, are uploaded dynamically by users at runtime. Without automatic image optimization, it becomes very difficult to serve such images efficiently.
\\nEditor’s note: This post was updated by Joseph Mawa in April 2025 to reflect the latest updates in Next.js version 15 and add modern alternatives to next/image
.
Image
component
Frameworks like Next.js offer an abstraction that solves the most common, mundane, and complex tasks like routing, internationalization, and image optimization.
\\nAccording to the Next.js team, the goal of Next.js is to improve two things: the developer and user experiences. While most of the optimization focuses on reducing the amount of JavaScript shipped to users, there are other resources, like images, that need optimization as well. Enter Next.js 10.
\\nVercel, the company behind Next.js, released Next.js version 10 with a built-in image component, next/image
, for automatic image optimization, providing five benefits.
With optimized images that are loaded lazily by default, users can expect a performance boost in website load time, ultimately improving the overall user experience.
\\n\\nWith next/image
’s simple-to-use API, developers have an improved experience themselves with the ability to define a basic image, tweak it to their requirements, or delve into advanced configuration options like caching and loaders.
Build times aren’t increased as a side-effect of optimization. Next.js optimizes images on demand as users request them, instead of at build time.
\\nImages are lazy loaded by default and can be served in modern formats like WebP in supported web browsers.
\\nNext.js can also automatically adopt future image formats and serve them to browsers that support those formats.
\\nImage
Component
The built-in next/image
API is the sweet spot for image optimization. It exposes an Image
component as a single source of truth. This means you only need to learn how to use one API to handle image optimization in Next.js.
In its most basic form, the built-in Next.js image component is similar to the HTML <img>
element. They both accept src
and alt
attributes:
import Image from \\"next/image\\";\\n\\nexport default function CardImage({ imageSrc, imageAltText }) {\\n return (\\n <div classname=\\"cardImageWrapper\\">\\n <Image src={imageSrc} alt={imageAltText} />\\n </div>\\n );\\n}\\n\\n
The Image
component accepts several other props. Some are required while others are optional. The src
, width
, height
, and alt
props are required. The others are optional.
Some of the optional props are used for customizing the default behavior of the Image
component. Let’s explore some of the Next.js built-in image optimization features.
The devices we use have varying viewport widths, screen sizes, and resolutions. With the Next.js image component, you can use the sizes
prop to specify the image width at different breakpoints. The value of the sizes
prop is a string similar to a CSS media query:
<Image\\n fill\\n src=\\"/example.png\\"\\n sizes=\\"(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw\\"\\n/>;\\n>\\n
Next.js will use the sizes
prop with the deviceSizes
and imageSizes
properties that are configured in the next.config
file to generate your image srcset
s. The browser will then select and fetch the appropriate image based on the user’s viewport width, screen size, and resolution.
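For a rough idea of the result, the rendered markup looks something like this (URLs simplified; the exact widths depend on your configuration):
\n
<img\n  src=\"/_next/image?url=%2Fexample.png&w=3840&q=75\"\n  srcset=\"\n    /_next/image?url=%2Fexample.png&w=640&q=75 640w,\n    /_next/image?url=%2Fexample.png&w=1080&q=75 1080w,\n    /_next/image?url=%2Fexample.png&w=3840&q=75 3840w\n  \"\n  sizes=\"(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw\"\n  alt=\"...\"\n/>\n\n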
If you don’t configure the device and image sizes yourself, Next.js will use the default values below. The default values are usually sufficient. You don’t need to modify them:
\\nmodule.exports = {\\n images: {\\n deviceSizes: [640, 750, 828, 1080, 1200, 1920, 2048, 3840],\\n imageSizes: [16, 32, 48, 64, 96, 128, 256, 384],\\n },\\n}\\n\\n
Images make up a significant proportion of resources on the web. They come in different sizes and formats. Some image formats are more efficient than others.
\\nBy default, Next.js converts images to WebP format in web browsers that support it. This is because WebP images have smaller file sizes than other formats like JPG and PNG. Similarly, a compressed WebP image retains a higher quality than other image formats.
\\nYou can configure the default image format using the next.config.js
file, like in the code below. The default format is image/webp
if you don’t specify your own:
module.exports = {\\n images: {\\n formats: [\'image/avif\'],\\n },\\n}\\n\\n
Next.js has a built-in feature for automatic image compression. It compresses the image to reduce the size of the image file. Smaller images lead to reduced bandwidth usage and faster page load. You can use the quality
prop to specify the quality of the compressed image.
The value of the quality
prop should be an integer from 1
to 100
. It defines the quality of the compressed image; 1
being the lowest quality and 100
being the highest. It defaults to 75
:
<Image src=\\"image-src\\" alt=\\"image-alt-text\\" quality={100} layout=\\"fill\\" />;\\n\\n
You can also use the next.config.js
file to specify the qualities you want instead of allowing all the qualities from 1
to 100
. As an example, the configuration below allows only three qualities: 30
, 50
, and 70
:
module.exports = {\\n images: {\\n qualities: [30, 50, 70],\\n },\\n}\\n>\\n
If the quality
you pass to the Image
component is not one of the values you specified in the qualities
array in the next.config.js
file, you will get an HTTP response of 400 (Bad Request)
.
Similar to the HTML <img>
element, the next/image
component has the loading
property you can use to specify when the browser should load an image.
Unlike in the HTML <img>
element where the loading
attribute defaults to eager
, in the Next.js image component, it defaults to lazy
. If you want the default, you can omit the attribute instead of explicitly setting it like so:
<Image src=\\"image-src\\" alt=\\"image-alt-text\\" layout=\\"fill\\" loading=\\"lazy\\" />;\\n>\\n
If the loading
property is set to lazy
, the browser defers loading off-screen images until the user is about to scroll them into view. Lazy loading images results in faster initial page load, reduced bandwidth usage, and overall improvement in user experience.
Next.js recommends that you lazy load off-screen images for reduced initial page load time and bandwidth usage. On the other hand, you need to preload images that are above the fold for improved performance. The built-in next/image
component has the priority
prop for doing just that.
If you set the value of the priority
prop to true
, the image is considered high-priority. Therefore, the browser will load it as early as possible. Using this property to preload above-the-fold images, such as icons and hero images, will improve the load time of your web page or app:
<Image\\n src=\\"image-src\\"\\n alt=\\"image-alt-text\\"\\n width={500}\\n height={300}\\n priority\\n/>;\\n\\n
You need to be aware that setting priority
of an image to true
will automatically disable lazy loading. Similarly, you shouldn’t use the priority prop on lazy-loaded images.
Despite all the built-in image optimization techniques highlighted above, most images take longer to load than other static assets. Therefore, to reduce cumulative layout shift and provide a better user experience, it is necessary to display a placeholder image while an image is still loading.
\\nYou can use the placeholder
property to specify a fallback image while an image is loading. Its possible values are blur
, empty
, and data:image/
. It defaults to empty
, and when it is empty
, there is no placeholder while the image is loading — only an empty space:
<Image\\n placeholder=\\"blur\\"\\n quality={100}\\n/>;\\n\\n
By default, Next.js dynamically optimizes images on request and caches them in the <distDir>/cache/images
directory. It will reuse the optimized image until the cache expires.
Next.js uses the Cache-Control
header or minimumCacheTTL
configuration option, depending on whichever is larger, to determine the maximum age of the cached image. You can use the next.config.js
object to change the default time-to-live (TTL) for cached optimized images like so:
module.exports = {\\n images: {\\n minimumCacheTTL: 120, // 120 seconds\\n },\\n}\\n\\n
If you don’t specify a custom value as in the code above, Next.js will use the default value of 60 seconds.
\nFor static image imports, Next.js hashes the file contents and caches the file indefinitely. Keep in mind that static imports can be disabled:
\n
module.exports = {\n  images: {\n    disableStaticImages: true,\n  },\n};\n\n
The Next.js docs have a detailed explanation of the default caching behavior. However, if you use a different loader, like Cloudinary, you must refer to its documentation to see how to enable caching.
\\nnext/image
By default, Next.js optimizes images and serves them directly from the Next.js web server. That is the default behavior of the built-in image optimization API.
\\nHowever, for some reason, you may want to delegate image optimization to other cloud providers like Cloudinary, Cloudflare, Contentful, and Fastly.
\\nMost of the cloud providers are not exactly alternatives to the next/image
component because you can still use them with the next/image
component. However, Next.js won’t be responsible for the image optimization. You’re delegating the image optimization to the cloud provider.
To opt out of the Next.js automatic image optimization, you need to create a custom loader. A loader is a function that takes the image src
, width
, and quality
as parameters and returns the image URL string. You can create a custom loader for each instance of the next/image
component or declare a custom loader in the next.config.js
file.
The code below illustrates how you can create a custom loader for a particular instance of the next/image
component. It declares a custom loader for the Cloudinary Image API and sets it as the value of the next/image
component’s loader
prop:
import Image from \\"next/image\\";\\n\\n//https://res.cloudinary.com/demo/image/upload/w_300,c_limit,q_auto/turtles.jpg\\nfunction cloudinaryLoader({ src, width, quality }) {\\n const params = [\\"f_auto\\", \\"c_limit\\", `w_${width}`, `q_${quality || \\"auto\\"}`];\\n return `https://res.cloudinary.com/demo/image/upload/${params.join(\\n \\",\\"\\n )}${src}`;\\n}\\n\\nexport default function TurtleImage() {\\n return (\\n <Image\\n src=\\"/turtles.jpg\\"\\n loader={cloudinaryLoader}\\n width={300}\\n height={300}\\n alt=\\"Turtle\\"\\n priority={true}\\n />\\n );\\n}\\n\\n
Alternatively, you can also use the loaderFile
property of your next.config.js
file to configure a custom loader for all instances of your next/image
component like so:
module.exports = {\\n images: {\\n loader: \'custom\',\\n loaderFile: \'./custom-loader.js\',\\n },\\n}\\n\\n
The value of the loaderFile
property is the path to your custom loader. This path should be relative to the root directory of your application. The loader file itself exports a default function, for example:
export default function customImageLoader({ src, width, quality }) {\\n return `https://example.com/${src}?w=${width}&q=${quality || 75}`;\\n}\\n\\n
You can check these examples in the Next.js docs to learn how to create custom loaders for the common cloud providers.
\\nSome cloud providers also provide their own image components that you can use instead of the next/image
component. As an example, ImageKit has the Next.js ImageKit SDK for rendering images optimized by ImageKit. You need to read the documentation for the corresponding cloud provider to know whether it has a custom image component you can use in a Next.js project.
Image optimization in Next.js improves the user and developer experience with a native, powerful API that’s easy to work with and extend. In turn, this addresses a major Core Web Vitals need and helps websites achieve a higher SEO rank, all starting and ending with next/image
.
Forms are an essential part of how users interact with websites and web applications. Validating a user’s data passed through a form is a crucial responsibility for a developer.
\\nReact Hook Form is a library that helps validate forms in React. It is a minimal library without any other dependencies, and is performant and straightforward to use, requiring developers to write fewer lines of code than other form libraries.
\\nReact 19 introduces built-in form handling. So, you might be asking: Is React Hook Form still worth using? In this guide, you will learn the differences, advantages, and best use cases of React Hook Form in 2025.
\\nReact Hook Form takes a slightly different approach than other form libraries in the React ecosystem by using uncontrolled inputs with ref
instead of depending on the state to control the inputs. This approach makes the forms more performant and reduces the number of re-renders. This also means that React Hook Form offers seamless integration with UI libraries because most libraries support the ref
attribute.
React Hook Form’s size is very small (just 8.6 kB minified and gzipped), and it has zero dependencies. The API is also very intuitive, which provides a seamless experience to developers. The library follows HTML standards for validating the forms using a constraint-based validation API.
\\nTo install React Hook Form, run the following command:
\\nnpm install react-hook-form\\n\\n
Editor’s note: This article was last updated by Isaac Okoro in April 2025 to reflect React 19’s new form-handling features, as well as to compare React Hook Form with React 19’s built-in form handling, explaining the differences and reinforcing when RHS is still the best option.
\\nIn this section, you will learn about the fundamentals of the useForm
Hook by creating a very basic registration form.
First, import the useForm
Hook from the react-hook-form
package:
import { useForm } from \\"react-hook-form\\";\\n\\n
Then, inside your component, use the Hook as follows:
\\nconst { register, handleSubmit } = useForm();\\n\\n
The useForm
Hook returns an object containing a few properties. For now, we’ll only require register
and handleSubmit
.
The register
method helps you register an input field into React Hook Form so that it is available for validation, and its value can be tracked for changes.
To register the input, we’ll pass the register
method into the input field as such:
<input type=\\"text\\" name=\\"firstName\\" {...register(\'firstName\')} />\\n\\n
This spread operator syntax is a new implementation in the library that enables strict type checking in forms with TypeScript. You can learn more about strict type checking in React Hook Form here.
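As a quick illustration of what that buys you, typing the form values makes every field name checked at compile time. The FormValues shape below is an assumed example:
\n
import { useForm } from \"react-hook-form\";\n\n// An assumed shape for this form's values\ntype FormValues = {\n  firstName: string;\n};\n\nexport default function TypedForm() {\n  const { register, handleSubmit } = useForm<FormValues>();\n\n  // register(\"firstName\") type-checks against FormValues;\n  // register(\"firstname\") would be a compile-time error\n  return (\n    <form onSubmit={handleSubmit((data) => console.log(data))}>\n      <input type=\"text\" {...register(\"firstName\")} />\n    </form>\n  );\n}\n\n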
\\nReact Hook Form versions older than v7 had the register
method attached to the ref
attribute as such:
<input type=\\"text\\" name=\\"firstName\\" ref={register} />\\n\\n
Note that the input component must have a name
prop, and its value should be unique. The handleSubmit
method, as the name suggests, manages form submission. It needs to be passed as the value to the onSubmit
prop of the form
component.
The handleSubmit
method can handle two functions as arguments. The first function passed as an argument will be invoked along with the registered field values when the form validation is successful. The second function is called with errors when the validation fails:
const onFormSubmit = data => console.log(data);\\n\\nconst onErrors = errors => console.error(errors);\\n\\n<form onSubmit={handleSubmit(onFormSubmit, onErrors)}>\\n {/* ... */}\\n</form>\\n\\n
Now that you have a fair idea about the basic usage of the useForm
Hook, let’s look at a more realistic example:
import React from \\"react\\";\\nimport { useForm } from \\"react-hook-form\\";\\n\\nconst RegisterForm = () => {\\n const { register, handleSubmit } = useForm();\\n const handleRegistration = (data) => console.log(data);\\n\\n return (\\n <form onSubmit={handleSubmit(handleRegistration)}>\\n <div>\\n <label>Name</label>\\n <input name=\\"name\\" {...register(\'name\')} />\\n </div>\\n <div>\\n <label>Email</label>\\n <input type=\\"email\\" name=\\"email\\" {...register(\'email\')} />\\n </div>\\n <div>\\n <label>Password</label>\\n <input type=\\"password\\" name=\\"password\\" {...register(\'password\')} />\\n </div>\\n <button>Submit</button>\\n </form>\\n );\\n};\\nexport default RegisterForm;\\n\\n
As you can see, no other components were imported to track the input values. The useForm
Hook makes the component code cleaner and easier to maintain, and because the form is uncontrolled, you do not have to pass props like onChange
and value
to each input.
You can use any other UI library of your choice to create the form. But first, make sure to check the documentation and find the prop used for accessing the reference attribute of the native input component.
\\nIn the next section, you will learn how to handle form validation in the form you just built.
\\n\\nTo apply validations to a field, you can pass validation parameters to the register
method. Validation parameters are similar to the existing HTML form validation standard. These validation parameters include the following properties:
- required indicates whether the field is required. If this property is set to true, the field cannot be empty
- minLength and maxLength set the minimum and maximum length for a string input value
- min and max set the minimum and maximum values for a numerical value
- type indicates the type of the input field; it can be email, number, text, or any other standard HTML input type
- pattern defines a pattern for the input value using a regular expression
, your code should turn out like this:
<input name=\\"name\\" type=\\"text\\" {...register(\'name\', { required: true } )} />\\n\\n
Now try submitting the form with this field empty. This will result in the following error object:
\\n{\\nname: {\\n type: \\"required\\",\\n message: \\"\\",\\n ref: <input name=\\"name\\" type=\\"text\\" />\\n }\\n}\\n\\n
Here, the type
property refers to the type of validation that failed, and the ref
property contains the native DOM input element.
You can also include a custom error message for the field by passing a string instead of a Boolean to the validation property:
\\n// ...\\n<form onSubmit={handleSubmit(handleRegistration, handleError)}>\\n <div>\\n <label>Name</label>\\n <input name=\\"name\\" {...register(\'name\', { required: \\"Name is required\\" } )} />\\n </div>\\n</form>\\n\\n
Then, access the errors object by using the useForm
Hook:
const { register, handleSubmit, formState: { errors } } = useForm();\\n\\n
You can display errors to your users like so:
\\nconst RegisterForm = () => {\\n const { register, handleSubmit, formState: { errors } } = useForm();\\n const handleRegistration = (data) => console.log(data);\\n\\n return (\\n <form onSubmit={handleSubmit(handleRegistration)}>\\n <div>\\n <label>Name</label>\\n <input type=\\"text\\" name=\\"name\\" {...register(\'name\')} />\\n {errors?.name && errors.name.message}\\n </div>\\n {/* more input fields... */}\\n <button>Submit</button>\\n </form>\\n );\\n};\\n\\n
Below you can find the complete example:
\\nimport React from \\"react\\";\\nimport { useForm } from \\"react-hook-form\\";\\n\\nconst RegisterForm = () => {\\n const { register, handleSubmit, formState: { errors } } = useForm();\\n const handleRegistration = (data) => console.log(data);\\n const handleError = (errors) => {};\\n\\n const registerOptions = {\\n name: { required: \\"Name is required\\" },\\n email: { required: \\"Email is required\\" },\\n password: {\\n required: \\"Password is required\\",\\n minLength: {\\n value: 8,\\n message: \\"Password must have at least 8 characters\\"\\n }\\n }\\n };\\n\\n return (\\n <form onSubmit={handleSubmit(handleRegistration, handleError)}>\\n <div>\\n <label>Name</label>\\n <input name=\\"name\\" type=\\"text\\" {...register(\'name\', registerOptions.name) }/>\\n <small className=\\"text-danger\\">\\n {errors?.name && errors.name.message}\\n </small>\\n </div>\\n <div>\\n <label>Email</label>\\n <input\\n type=\\"email\\"\\n name=\\"email\\"\\n {...register(\'email\', registerOptions.email)}\\n />\\n <small className=\\"text-danger\\">\\n {errors?.email && errors.email.message}\\n </small>\\n </div>\\n <div>\\n <label>Password</label>\\n <input\\n type=\\"password\\"\\n name=\\"password\\"\\n {...register(\'password\', registerOptions.password)}\\n />\\n <small className=\\"text-danger\\">\\n {errors?.password && errors.password.message}\\n </small>\\n </div>\\n <button>Submit</button>\\n </form>\\n );\\n};\\nexport default RegisterForm;\\n\\n
If you want to validate the field when there is an onChange
or onBlur
event, you can pass a mode
property to the useForm
Hook:
const { register, handleSubmit, formState: { errors } } = useForm({\n  mode: \"onBlur\"\n});\n\n
You can find more details on the useForm
Hook in the API reference.
In some cases, the external UI component you want to use in your form may not support ref
, and can only be controlled by the state.
React Hook Form has provisions for such cases and can easily integrate with any third-party-controlled components using a Controller
component.
React Hook Form provides the wrapper Controller
component that allows you to register a controlled external component, similar to how the register
method works. In this case, instead of the register
method, you will use the control
object from the useForm
Hook:
const { register, handleSubmit, control } = useForm();\\n\\n
Say that you have to create a role field in your form that will accept values from a select input. You can create the select input using the react-select
library.
The control
object should be passed to the control
prop of the Controller
component, along with the name
of the field. You can specify the validation rules using the rules
prop.
The controlled component should be rendered through the Controller
component’s render
prop (older versions used the as
prop). The Select
component also requires an options
prop to render the dropdown options:
<Controller\\n name=\\"role\\"\\n control={control}\\n defaultValue=\\"\\"\\n rules={registerOptions.role}\\n render={({ field }) => (\\n <Select options={selectOptions} {...field} label=\\"Text field\\" />\\n )}\\n/>\\n\\n
The render
prop above provides onChange
, onBlur
, name
, ref
, and value
to the child component. By spreading field
into the Select
component, React Hook Form registers the input field.
You can check out the complete example for the role field below:
\nimport { useForm, Controller } from \"react-hook-form\";\nimport Select from \"react-select\";\n// ...\nconst { register, handleSubmit, formState: { errors }, control } = useForm({\n  // use mode to specify the event that triggers each input field\n  mode: \"onBlur\"\n});\n\nconst selectOptions = [\n  { value: \"student\", label: \"Student\" },\n  { value: \"developer\", label: \"Developer\" },\n  { value: \"manager\", label: \"Manager\" }\n];\n\nconst registerOptions = {\n  // ...\n  role: { required: \"Role is required\" }\n};\n\n// ...\n<form>\n  <div>\n    <label>Your Role</label>\n    <Controller\n      name=\"role\"\n      control={control}\n      defaultValue=\"\"\n      rules={registerOptions.role}\n      render={({ field }) => (\n        <Select options={selectOptions} {...field} label=\"Text field\" />\n      )}\n    />\n    <small className=\"text-danger\">\n      {errors?.role && errors.role.message}\n    </small>\n  </div>\n</form>\n\n
You can also go through the API reference for the Controller
component for a detailed explanation.
useFormContext
in React Hook Form
useFormContext
is a hook provided by React Hook Form that allows you to access and manipulate the form context/state of deeply nested components. It allows you to share form methods like register
, errors
, control
, etc., within a component without passing props down through multiple levels.
useFormContext
is useful when you need to access form methods in deeply nested components or when using custom hooks that need to interact with the form state. Here is how to use useFormContext
:
import React from \'react\';\\nimport { useForm, FormProvider, useFormContext } from \'react-hook-form\';\\n\\nconst Input = ({ name }) => {\\n const { register } = useFormContext();\\n return <input {...register(name)} />;\\n};\\n\\nconst ContextForm = () => {\\n const methods = useForm();\\n return (\\n <FormProvider {...methods}>\\n <form onSubmit={methods.handleSubmit(data => console.log(data))}>\\n <Input name=\\"firstName\\" />\\n <Input name=\\"lastName\\" />\\n <button type=\\"submit\\">Submit</button>\\n </form>\\n </FormProvider>\\n );\\n};\\n\\nexport default ContextForm;\\n\\n
In the example above, the Input
component uses the useFormContext
Hook to access the form method register
, allowing it to register the input field without prop drilling from the parent component.
You can also create a component to make it easier for developers to handle more complex forms, such as when inputs are deeply nested within component trees:
\\nimport { FormProvider, useForm, useFormContext } from \\"react-hook-form\\";\\n\\nexport const ConnectForm = ({ children }) => {\\n const methods = useFormContext();\\n return children({ ...methods });\\n};\\n\\nexport const DeepNest = () => (\\n <ConnectForm>\\n {({ register }) => <input {...register(\\"hobbies\\")} />}\\n </ConnectForm>\\n);\\n\\nexport const App = () => {\\n const methods = useForm();\\n\\n return (\\n <FormProvider {...methods}>\\n <form>\\n <DeepNest />\\n </form>\\n </FormProvider>\\n );\\n};\\n\\n
React Hook Form supports arrays and nested fields out of the box, allowing you to easily handle complex data structures.
\\nTo work with arrays, you can use the useFieldArray
Hook. This is a custom hook provided by React Hook Form that helps with handling form fields, such as arrays of inputs. The hook provides methods to add, remove, and swap array items. Let’s see the useFieldArray
Hook in action:
import React from \'react\';\\nimport { useForm, FormProvider, useFieldArray, useFormContext } from \'react-hook-form\';\\n\\nconst Hobbies = () => {\\n const { control, register } = useFormContext();\\n const { fields, append, remove } = useFieldArray({\\n control,\\n name: \'hobbies\'\\n });\\n\\n return (\\n <div>\\n {fields.map((field, index) => (\\n <div key={field.id}>\\n <input {...register(`hobbies.${index}.name`)} />\\n <button type=\\"button\\" onClick={() => remove(index)}>Remove</button>\\n </div>\\n ))}\\n <button type=\\"button\\" onClick={() => append({ name: \'\' })}>Add Hobby</button>\\n </div>\\n );\\n};\\n\\nconst MyForm = () => {\\n const methods = useForm();\\n\\n const onSubmit = data => {\\n console.log(data);\\n };\\n\\n return (\\n <FormProvider {...methods}>\\n <form onSubmit={methods.handleSubmit(onSubmit)}>\\n <Hobbies />\\n <button type=\\"submit\\">Submit</button>\\n </form>\\n </FormProvider>\\n );\\n};\\n\\nexport default MyForm;\\n\\n
From the above code, the Hobbies
component uses useFieldArray
to manage an array of hobbies. Users can add or remove hobbies dynamically, and each hobby has its own set of fields.
You can also opt to control the entire field array to update the field
object with each onChange
event. You can map the watched field array values to the controlled fields to make sure that input changes reflect on the field
object:
import React from \'react\';\\nimport { useForm, FormProvider, useFieldArray, useFormContext } from \'react-hook-form\';\\n\\nconst Hobbies = () => {\\n const { control, register, watch } = useFormContext();\\n const { fields, append, remove } = useFieldArray({\\n control,\\n name: \'hobbies\'\\n });\\n const watchedHobbies = watch(\\"hobbies\\");\\n const controlledFields = fields.map((field, index) => ({\\n ...field,\\n ...watchedHobbies[index]\\n }));\\n\\n return (\\n <div>\\n {controlledFields.map((field, index) => (\\n <div key={field.id}>\\n <input {...register(`hobbies.${index}.name`)} defaultValue={field.name} />\\n <button type=\\"button\\" onClick={() => remove(index)}>Remove</button>\\n </div>\\n ))}\\n <button type=\\"button\\" onClick={() => append({ name: \'\' })}>Add Hobby</button>\\n </div>\\n );\\n};\\n\\nconst MyForm = () => {\\n const methods = useForm({\\n defaultValues: {\\n hobbies: [{ name: \\"Reading\\" }]\\n }\\n });\\n\\n const onSubmit = data => {\\n console.log(data);\\n };\\n\\n return (\\n <FormProvider {...methods}>\\n <form onSubmit={methods.handleSubmit(onSubmit)}>\\n <Hobbies />\\n <button type=\\"submit\\">Submit</button>\\n </form>\\n </FormProvider>\\n );\\n};\\n\\nexport default MyForm;\\n\\n
The code above uses the watch
function to monitor changes to the hobbies
field array and controlledFields
to make sure that each input reflects its latest state.
Nested fields can be handled similarly to arrays. You just need to specify the correct path using dot notation when registering inputs:
\\nimport React from \'react\';\\nimport { useForm, FormProvider, useFormContext } from \'react-hook-form\';\\n\\nconst Address = () => {\\n const { register } = useFormContext();\\n return (\\n <div>\\n <input {...register(\'address.street\')} placeholder=\\"Street\\" />\\n <input {...register(\'address.city\')} placeholder=\\"City\\" />\\n </div>\\n );\\n};\\n\\nconst MyForm = () => {\\n const methods = useForm();\\n\\n const onSubmit = data => {\\n console.log(data);\\n };\\n\\n return (\\n <FormProvider {...methods}>\\n <form onSubmit={methods.handleSubmit(onSubmit)}>\\n <Address />\\n <button type=\\"submit\\">Submit</button>\\n </form>\\n </FormProvider>\\n );\\n};\\n\\nexport default MyForm;\\n\\n
In the code above, the Address
component registers fields for street
and city
under the address
object in the form state. This way, the form data will be structured as an object with nested properties:
{\\n \\"address\\": {\\n \\"street\\": \\"value\\",\\n \\"city\\": \\"value\\"\\n }\\n}\\n\\n
Using useFormContext
with a deeply nested field can affect the performance of your application when it is not managed properly because the FormProvider
triggers a re-render whenever the form state updates. Using a tool like React memo
can help optimize performance when using the useFormContext
Hook by preventing unnecessary re-renders.
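As a minimal sketch of that optimization, you can pass the relevant slice of form state down as props and memoize the child so it only re-renders when that slice changes:
\n
import React, { memo } from \"react\";\nimport { useForm } from \"react-hook-form\";\n\n// Re-renders only when isDirty flips, not on every keystroke\nconst NestedInput = memo(\n  ({ register, formState: { isDirty } }) => (\n    <div>\n      <input {...register(\"example\")} />\n      <p>{isDirty ? \"Form is dirty\" : \"Form is pristine\"}</p>\n    </div>\n  ),\n  (prevProps, nextProps) =>\n    prevProps.formState.isDirty === nextProps.formState.isDirty\n);\n\nexport default function App() {\n  const { register, handleSubmit, formState } = useForm();\n  return (\n    <form onSubmit={handleSubmit((data) => console.log(data))}>\n      <NestedInput register={register} formState={formState} />\n      <button type=\"submit\">Submit</button>\n    </form>\n  );\n}\n\n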
React Hook Form supports validation for arrays and nested fields using the Yup or Zod validation libraries.
\\nThe following example sets up validation for the hobbies
array and the address
object using Yup schema validation. Each hobby name and address field is validated according to the specified rules:
import React from \'react\';\\nimport { useForm, FormProvider, useFieldArray, useFormContext } from \'react-hook-form\';\\nimport { yupResolver } from \'@hookform/resolvers/yup\';\\nimport * as yup from \'yup\';\\n\\nconst schema = yup.object().shape({\\n hobbies: yup.array().of(\\n yup.object().shape({\\n name: yup.string().required(\'Hobby is required\')\\n })\\n ),\\n address: yup.object().shape({\\n street: yup.string().required(\'Street is required\'),\\n city: yup.string().required(\'City is required\')\\n })\\n});\\n\\nconst Hobbies = () => {\\n const { control, register } = useFormContext();\\n const { fields, append, remove } = useFieldArray({\\n control,\\n name: \'hobbies\'\\n });\\n\\n return (\\n <div>\\n {fields.map((field, index) => (\\n <div key={field.id}>\\n <input {...register(`hobbies.${index}.name`)} placeholder=\\"Hobby\\" />\\n <button type=\\"button\\" onClick={() => remove(index)}>Remove</button>\\n </div>\\n ))}\\n <button type=\\"button\\" onClick={() => append({ name: \'playing football\' })}>Add Hobby</button>\\n </div>\\n );\\n};\\n\\nconst Address = () => {\\n const { register } = useFormContext();\\n return (\\n <div>\\n <input {...register(\'address.street\')} placeholder=\\"Street\\" />\\n <input {...register(\'address.city\')} placeholder=\\"City\\" />\\n </div>\\n );\\n};\\n\\nconst App = () => {\\n const methods = useForm({\\n resolver: yupResolver(schema)\\n });\\n\\n const onSubmit = data => {\\n console.log(data);\\n };\\n\\n return (\\n <FormProvider {...methods}>\\n <form onSubmit={methods.handleSubmit(onSubmit)}>\\n <Hobbies />\\n <Address />\\n <button type=\\"submit\\">Submit</button>\\n </form>\\n </FormProvider>\\n );\\n};\\n\\nexport default App;\\n\\n
One of the most exciting upgrades I have witnessed in React 19 is its complete overhaul of form handling. If you’ve been building React apps for a while, you’re probably familiar with the fact that you have to control inputs, manage state, and then handle submission.
\\nReact 19 changes all that with a better approach that gets back to the web fundamentals.
\nIn React 18, we had to manually:
- Track each input’s value with useState
- Keep every input in sync with an onChange handler
- Handle submission ourselves, including calling e.preventDefault()
- Manage the submitting/loading state
For example, a simple form in React 18 looked something like this:
\\nfunction ContactForm() {\\n const [email, setEmail] = useState(\'\');\\n const [name, setName] = useState(\'\');\\n const [isSubmitting, setIsSubmitting] = useState(false);\\n\\n const handleSubmit = async (e) => {\\n e.preventDefault();\\n setIsSubmitting(true);\\n try {\\n await submitFormData({ email, name });\\n setEmail(\'\');\\n setName(\'\');\\n } catch (error) {\\n console.error(error);\\n } finally {\\n setIsSubmitting(false);\\n }\\n };\\n\\n return (\\n <form onSubmit={handleSubmit}>\\n <input \\n type=\\"text\\" \\n value={name} \\n onChange={(e) => setName(e.target.value)} \\n />\\n <input \\n type=\\"email\\" \\n value={email} \\n onChange={(e) => setEmail(e.target.value)} \\n />\\n <button type=\\"submit\\" disabled={isSubmitting}>\\n {isSubmitting ? \'Submitting...\' : \'Submit\'}\\n </button>\\n </form>\\n );\\n}\\n\\n
This approach always felt a bit weird — maybe not to React devs but certainly to developers coming from other frameworks. So in React 19, there had to be a few major changes. You can check out the full code and demo for further reference.
\\nReact 19 introduced Actions, asynchronous functions that handle form submissions directly.
\\nWith the new approach in React, we will now treat form inputs as elements that do not need to be tracked in React state. This takes advantage of the browser’s built-in form-handling qualities.
\\nHere’s how the form from above would look in React 19:
\\nfunction ContactForm() {\\n const submitContact = async (prevState, formData) => {\\n const name = formData.get(\'name\');\\n const email = formData.get(\'email\');\\n\\n // Process form data (e.g., send to API)\\n // No need to preventDefault or reset form - React handles it\\n return { success: true, message: `Thanks ${name}!` };\\n };\\n\\n const [state, formAction] = useActionState(submitContact, {});\\n\\n return (\\n <form action={formAction}>\\n <input type=\\"text\\" name=\\"name\\" />\\n <input type=\\"email\\" name=\\"email\\" />\\n <SubmitButton />\\n {state.success && <p>{state.message}</p>}\\n </form>\\n );\\n}\\n\\nfunction SubmitButton() {\\n const { pending } = useFormStatus();\\n return (\\n <button disabled={pending}>\\n {pending ? \'Submitting...\' : \'Submit\'}\\n </button>\\n );\\n}\\n\\n
React 19 embraces standard HTML form behavior by using the action
attribute instead of onSubmit
. This means:
- No need to call e.preventDefault() or manually reset the form

FormData API
Actions receive a FormData
object, which is a native browser API. You access values with:
formData.get(\'fieldName\'); // Gets single value\\nformData.getAll(\'multipleSelect\'); // Gets multiple selected values\\n\\n
N.B., you’ll need to convert this to a regular object if sending as JSON:
\\nconst payload = Object.fromEntries(formData.entries());\\n\\n
React 19 introduces new hooks that save you a few lines of code. These hooks include:
\\nuseActionState
: This connects forms to action functions and tracks the response stateuseFormStatus
: Provides submission status (pending, loading, etc.)Perhaps the biggest and yet one of my favorite quality-of-life improvements from React is that inputs no longer need state:
\\n// React 18\\n<input value={email} onChange={(e) => setEmail(e.target.value)} />\\n\\n// React 19\\n<input name=\\"email\\" />\\n\\n
The browser takes charge of managing the input state for you!
\\nIf you’re using a framework like Next.js, React 19’s form Actions seamlessly connect with server Actions:
\\nfunction ContactForm() {\\n async function submitToServer(prevState, formData) {\\n \'use server\'; // This marks it as a server action\\n // Process server-side (database, etc.)\\n return { success: true };\\n }\\n\\n const [state, formAction] = useActionState(submitToServer, {});\\n\\n return (\\n <form action={formAction}>\\n {/* form inputs */}\\n </form>\\n );\\n}\\n\\n
This is where React 19 still has some gaps. So far, there’s no built-in validation system beyond standard HTML validation attributes (required
, pattern
, etc.).
For complex validation, I advise pairing the native attributes with your own checks inside the action, or reaching for a schema library (Zod, Yup) or React Hook Form.
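For simple cases, though, the native attributes go a long way. A minimal sketch, with assumed field names and the formAction from the earlier example:
\n
function SignupForm({ formAction }) {\n  // The browser enforces these constraints before the action ever runs\n  return (\n    <form action={formAction}>\n      <input type=\"email\" name=\"email\" required />\n      <input type=\"text\" name=\"username\" required minLength={3} pattern=\"[a-z0-9]+\" />\n      <button type=\"submit\">Sign up</button>\n    </form>\n  );\n}\n\n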
\nThe table below highlights the differences and similarities between React Hook Form and React 19’s native form handling:
\\nFeature | \\nReact Hook Form | \\nReact 19 native forms | \\n
---|---|---|
Built-in form handling | \\nNo, requires RHF setup | \\nYes, now built-in | \\n
Validation | \\nSupports external validation libraries (Zod, Yup) | \\nBasic HTML validation with manual implementation | \\n
Performance | \\nOptimized for large forms, | \\nMinimal re-renders | \\n
Form submission | \\nControlled via hooks (handleSubmit) | \\nMore declarative in React 19 | \\n
Learning curve | \\nSimple with a custom API to learn | \\nLow because it follows HTML standards | \\n
Complex forms | \\nIt has a built-in solution | \\nWill require a custom code | \\n
TypeScript support | \\nExcellent | \\nBasic | \\n
Error handling | \\nBuilt-in support | \\nManual implementation | \\n
Field arrays | \\nBuilt-in support | \\nManual implementation | \\n
Bundle size | \\n~13kb (minified + gzipped) | \\nNo extra size as it is part of React | \\n
You will likely prefer using React Hook Form for the following use cases:
\\nReact Hook Form performs at a better level when managing large forms with a particularly large number of fields. This is how you would use it:
\\nfunction LargeApplicationForm() {\\n // RHF only re-renders fields that change, not the entire form\\n const { register, handleSubmit, formState } = useForm({\\n mode: \\"onChange\\", // Validate on change for immediate feedback\\n defaultValues: {\\n personalInfo: { firstName: \\"\\", lastName: \\"\\", email: \\"\\" },\\n address: { street: \\"\\", city: \\"\\", zipCode: \\"\\" },\\n employment: { company: \\"\\", position: \\"\\", yearsOfExperience: \\"\\" },\\n education: { degree: \\"\\", institution: \\"\\", graduationYear: \\"\\" },\\n // ... dozens more fields\\n }\\n });\\n\\n return (\\n <form onSubmit={handleSubmit(onSubmit)}>\\n <PersonalInfoSection register={register} errors={formState.errors} />\\n <AddressSection register={register} errors={formState.errors} />\\n {/* Many more sections */}\\n <button type=\\"submit\\">Submit Application</button>\\n </form>\\n );\\n}\\n\\n
The code above ensures that when a user types in one field, only that specific field re-renders, not the whole form.
\\nWorking with dynamic fields (adding/removing items) is much simpler with RHF’s useFieldArray
Hook:
function OrderForm() {\\n const { control, register, handleSubmit } = useForm({\\n defaultValues: {\\n items: [{ product: \\"\\", quantity: 1, price: 0 }]\\n }\\n });\\n\\n const { fields, append, remove, move } = useFieldArray({\\n control,\\n name: \\"items\\"\\n });\\n\\n // Calculate total order value\\n const watchItems = useWatch({ control, name: \\"items\\" });\\n const total = watchItems.reduce((sum, item) => \\n sum + (item.quantity * item.price), 0);\\n\\n return (\\n <form onSubmit={handleSubmit(onSubmit)}>\\n {fields.map((field, index) => (\\n <div key={field.id} className=\\"item-row\\">\\n <select {...register(`items.${index}.product`)}>\\n <option value=\\"\\">Select Product</option>\\n <option value=\\"product1\\">Product 1</option>\\n <option value=\\"product2\\">Product 2</option>\\n </select>\\n\\n <input \\n type=\\"number\\" \\n {...register(`items.${index}.quantity`, { \\n valueAsNumber: true,\\n min: 1\\n })} \\n />\\n\\n <input \\n type=\\"number\\" \\n {...register(`items.${index}.price`, { \\n valueAsNumber: true \\n })}\\n />\\n\\n <button type=\\"button\\" onClick={() => remove(index)}>\\n Remove\\n </button>\\n\\n {/* Move up/down buttons */}\\n </div>\\n ))}\\n\\n <button type=\\"button\\" onClick={() => append({ product: \\"\\", quantity: 1, price: 0 })}>\\n Add Item\\n </button>\\n\\n <div>Total: ${total.toFixed(2)}</div>\\n <button type=\\"submit\\">Place Order</button>\\n </form>\\n );\\n}\\n\\n
RHF’s integration with validation libraries like Zod makes complex validation much easier:
\\n// Define complex schema with interdependent validations\\nconst schema = z.object({\\n password: z.string()\\n .min(8, \\"Password must be at least 8 characters\\")\\n .regex(/[A-Z]/, \\"Password must contain at least one uppercase letter\\")\\n .regex(/[a-z]/, \\"Password must contain at least one lowercase letter\\")\\n .regex(/[0-9]/, \\"Password must contain at least one number\\")\\n .regex(/[^A-Za-z0-9]/, \\"Password must contain at least one special character\\"),\\n confirmPassword: z.string(),\\n // More fields...\\n}).refine(data => data.password === data.confirmPassword, {\\n message: \\"Passwords don\'t match\\",\\n path: [\\"confirmPassword\\"]\\n});\\n\\nfunction RegistrationForm() {\\n const { register, handleSubmit, formState: { errors } } = useForm({\\n resolver: zodResolver(schema),\\n mode: \\"onBlur\\" // Validate fields when they lose focus\\n });\\n\\n return (\\n <form onSubmit={handleSubmit(onSubmit)}>\\n <div>\\n <label>Password</label>\\n <input type=\\"password\\" {...register(\\"password\\")} />\\n {errors.password && <p className=\\"error\\">{errors.password.message}</p>}\\n </div>\\n\\n <div>\\n <label>Confirm Password</label>\\n <input type=\\"password\\" {...register(\\"confirmPassword\\")} />\\n {errors.confirmPassword && <p className=\\"error\\">{errors.confirmPassword.message}</p>}\\n </div>\\n\\n {/* More fields */}\\n <button type=\\"submit\\">Register</button>\\n </form>\\n );\\n}\\n\\n
RHF integrates seamlessly with component libraries like Shadcn UI:
\\nimport { useForm } from \\"react-hook-form\\";\\nimport { zodResolver } from \\"@hookform/resolvers/zod\\";\\nimport * as z from \\"zod\\";\\nimport { Button } from \\"@/components/ui/button\\"; // shadcn/ui Button\\nimport {\\n Form,\\n FormField,\\n FormItem,\\n FormLabel,\\n FormControl,\\n FormMessage,\\n} from \\"@/components/ui/form\\"; // shadcn/ui Form components\\nimport { Input } from \\"@/components/ui/input\\"; // shadcn/ui Input\\nimport {\\n Select,\\n SelectTrigger,\\n SelectValue,\\n SelectContent,\\n SelectItem,\\n} from \\"@/components/ui/select\\"; // shadcn/ui Select components\\n\\n// Define schema (assumed missing in your example)\\nconst formSchema = z.object({\\n username: z.string().min(1, \\"Username is required\\"),\\n email: z.string().email(\\"Invalid email address\\"),\\n role: z.enum([\\"admin\\", \\"user\\", \\"editor\\"]),\\n isActive: z.boolean(),\\n});\\n\\nfunction UserForm() {\\n const form = useForm({\\n resolver: zodResolver(formSchema),\\n defaultValues: {\\n username: \\"\\",\\n email: \\"\\",\\n role: \\"user\\",\\n isActive: true,\\n },\\n });\\n\\n const onSubmit = (data) => {\\n console.log(\\"Form submitted:\\", data);\\n };\\n\\n return (\\n <Form {...form}>\\n <form onSubmit={form.handleSubmit(onSubmit)} className=\\"space-y-8\\">\\n <FormField\\n control={form.control}\\n name=\\"username\\"\\n render={({ field }) => (\\n <FormItem>\\n <FormLabel>Username</FormLabel>\\n <FormControl>\\n <Input placeholder=\\"johndoe\\" {...field} />\\n </FormControl>\\n <FormMessage />\\n </FormItem>\\n )}\\n />\\n\\n <FormField\\n control={form.control}\\n name=\\"role\\"\\n render={({ field }) => (\\n <FormItem>\\n <FormLabel>Role</FormLabel>\\n <Select onValueChange={field.onChange} defaultValue={field.value}>\\n <FormControl>\\n <SelectTrigger>\\n <SelectValue placeholder=\\"Select a role\\" />\\n </SelectTrigger>\\n </FormControl>\\n <SelectContent>\\n <SelectItem value=\\"admin\\">Admin</SelectItem>\\n <SelectItem value=\\"user\\">User</SelectItem>\\n <SelectItem value=\\"editor\\">Editor</SelectItem>\\n </SelectContent>\\n </Select>\\n <FormMessage />\\n </FormItem>\\n )}\\n />\\n\\n {/* Example additional field */}\\n <FormField\\n control={form.control}\\n name=\\"email\\"\\n render={({ field }) => (\\n <FormItem>\\n <FormLabel>Email</FormLabel>\\n <FormControl>\\n <Input placeholder=\\"[email protected]\\" {...field} />\\n </FormControl>\\n <FormMessage />\\n </FormItem>\\n )}\\n />\\n\\n <Button type=\\"submit\\">Submit</Button>\\n </form>\\n </Form>\\n );\\n}\\n\\nexport default UserForm;\\n\\n
Each of the examples above highlights and demonstrates different use cases where you might prefer using React Hook Form because it provides more substantial value beyond what React 19’s native form handling currently offers.
\\nWhen combining React Hook Form with React 19’s form Actions, it’s important to create a seamless integration between client-side validation and server-side processing. Here’s how to implement this pattern correctly:
\\nThe key to proper integration lies in using useFormState
for server state management while leveraging React Hook Form’s validation advantage. Here’s how this works:
\\n\\"use client\\";\\nimport { zodResolver } from \\"@hookform/resolvers/zod\\";\\nimport { useFormState } from \\"react-dom\\";\\nimport { useRef } from \\"react\\";\\nimport { useForm } from \\"react-hook-form\\";\\nimport { Button } from \\"@/components/ui/button\\";\\nimport { Form, FormControl, FormField, FormItem, FormMessage } from \\"@/components/ui/form\\";\\nimport { Input } from \\"@/components/ui/input\\";\\nimport { z } from \\"zod\\";\\n\\n// 1. Define form schema with Zod\\nconst schema = z.object({\\n email: z.string().email(),\\n name: z.string().min(2),\\n});\\n\\n// 2. Define server action type\\ntype FormState = {\\n message: string;\\n fields?: Record<string, string>;\\n issues?: string[];\\n};\\n\\n// Server action (should be in a separate file with \\"use server\\")\\n// export async function formAction(prevState: FormState, formData: FormData): Promise<FormState> {\\n// const fields = Object.fromEntries(formData);\\n// if (!fields.email.includes(\\"@\\")) {\\n// return { message: \\"Invalid email\\", issues: [\\"Email must contain @\\"] };\\n// }\\n// return { message: \\"Success\\", fields };\\n// }\\n\\nexport function MyForm() {\\n // 3. Connect React Hook Form with server action\\n const [state, formAction] = useFormState(formAction, { message: \\"\\" });\\n const formRef = useRef<HTMLFormElement>(null);\\n\\n const form = useForm<z.infer<typeof schema>>({\\n resolver: zodResolver(schema),\\n defaultValues: {\\n email: state?.fields?.email || \\"\\",\\n name: state?.fields?.name || \\"\\",\\n },\\n });\\n\\n return (\\n <Form {...form}>\\n {/* Display server-side errors */}\\n {state?.issues && (\\n <div className=\\"text-red-500\\">\\n {state.issues.map((issue) => (\\n <div key={issue}>{issue}</div>\\n ))}\\n </div>\\n )}\\n\\n <form\\n ref={formRef}\\n action={formAction}\\n onSubmit={form.handleSubmit(() => {\\n formAction(new FormData(formRef.current!));\\n })}\\n >\\n <FormField\\n control={form.control}\\n name=\\"name\\"\\n render={({ field }) => (\\n <FormItem>\\n <FormControl>\\n <Input placeholder=\\"Name\\" {...field} />\\n </FormControl>\\n <FormMessage /> {/* Client-side errors */}\\n </FormItem>\\n )}\\n />\\n\\n <FormField\\n control={form.control}\\n name=\\"email\\"\\n render={({ field }) => (\\n <FormItem>\\n <FormControl>\\n <Input placeholder=\\"Email\\" {...field} />\\n </FormControl>\\n <FormMessage /> {/* Client-side errors */}\\n </FormItem>\\n )}\\n />\\n\\n <Button type=\\"submit\\">Submit</Button>\\n </form>\\n </Form>\\n );\\n}\\n\\n
In the code above, we put together two submission mechanisms to create a smooth validation flow from client to server. These mechanisms are React Hook Form’s client-side validation and submission, and React 19’s native form Actions (server-side processing).
\\nRemember when we highlighted that in React 19 form Actions, you won’t need preventDefault()
? Well, that’s when you will not be using RHF. If you are using RHF, we will need to use preventDefault()
to temporarily stop the native form submission that would normally happen immediately when the form is submitted:
onSubmit={(evt) => {\\n evt.preventDefault();\\n form.handleSubmit(() => {\\n formAction(new FormData(formRef.current!));\\n })(evt);\\n}}\\n\\n
The highlighted code above gives React Hook Form a chance to run its client-side validation first, which is handled by form.handleSubmit()
.
If the client-side validation passes — that is, all fields are valid according to our Zod schema — then the callback function inside handleSubmit()
runs, which manually triggers our server action with formAction(new FormData(...))
.
If validation fails, React Hook Form will display error messages, and the server action won’t be called at all.
\\nThe straight answer is no, React 19 doesn’t replace React Hook Form. This is because React 19 introduces new form handling features like useActionState
, useFormStatus
, and useOptimistic
, but React Hook Form remains a standalone library offering additional flexibility, validation, and performance optimizations not fully covered by React 19’s native tools.
Use React Hook Form when:
- You’re building large forms with many fields
- You need dynamic field arrays (adding and removing items)
- You require complex or schema-based validation (Zod, Yup)
- You’re integrating with UI component libraries like shadcn/ui
\nSo, should you use React Hook Form or React 19’s built-in handling? It depends on your needs. For simple forms, React 19’s native approach is likely sufficient. For complex forms with lots of validation, form libraries still provide a lot of value.
\\nThe good news is that React 19 gives you options. You can use its simplified approach for simpler forms while reserving more complex libraries for forms that need advanced validation and state management, as we have seen already.
\\nReact Hook Form is an excellent addition to the React open source ecosystem, significantly simplifying the creation and maintenance of forms. Its greatest strengths include its focus on the developer experience and its flexibility. It integrates seamlessly with state management libraries and works excellently in React Native. Until next time, stay safe and keep building more forms. Cheers ✌
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nMoveBefore()
\\n moveBefore()
API?\\n moveBefore()
API\\n The newly announced moveBefore()
API helps developers easily reposition DOM elements while preserving their state. This new API is particularly valuable for web applications with complex animations and more nuanced state management.
Chrome recently announced the moveBefore()
API. If this is your first time coming across this API, it just might be a game-changer.
When it comes to moving elements around your webpage, the DOM has traditionally been limited to removing and inserting primitives. For the past twenty years, whenever we as developers “move” elements within a webpage, what really happens behind the scenes is that we remove and then insert that element elsewhere.
\\nThe element also tends to lose its initial state. There is a workaround for this, but it’s a bit complicated for such a menial task. This is exactly why we have the moveBefore()
API.
In this article, we’ll discuss how DOM elements were moved previously, and what difference the moveBefore()
API brings. We will also look at the benefits of using moveBefore()
over more traditional methods, such as appendChild() or insertBefore()
. Feel free to clone this demo project to see moveBefore()
in action.
moveBefore()
The moveBefore()
API is available on any DOM node, and its syntax looks like this:
parentNode.moveBefore(nodeToMove, referenceNode);\\n
Let’s break down the syntax above:
parentNode – This is the destination where you want your element to end up. It must be a node capable of having children.

Example: If you have <div id="container2"></div>, document.getElementById('container2') could be your parentNode.

nodeToMove – This is the element you’re relocating. It can already be in the DOM (attached to another parent) or detached (not currently in the DOM). Unlike older methods, moving it with moveBefore() preserves its state.

Example: An <iframe id="myIframe"> you want to shift from one container to another.

referenceNode – This specifies where nodeToMove lands among parentNode's children. It must be a direct child of parentNode or null.

If it’s a child (e.g., <h3> inside <div>), nodeToMove is inserted right before it. If it’s null, nodeToMove goes to the end of parentNode's child list (like appendChild).

Example: If parentNode has <h3> and <p>, passing the <p> as referenceNode places nodeToMove between <h3> and <p>.
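To make the breakdown concrete, here’s a minimal sketch (the container IDs and markup are hypothetical; assume #container2 holds an <h3> followed by a <p>):

// Hypothetical markup: #mover starts inside #container1;
// #container2 contains an <h3> followed by a <p>
const parent = document.getElementById("container2");
const nodeToMove = document.getElementById("mover");
const reference = parent.querySelector("p");

// Moves #mover between container2's <h3> and <p>, preserving its state
parent.moveBefore(nodeToMove, reference);

// Passing null instead appends #mover to the end of container2's children
// parent.moveBefore(nodeToMove, null);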
A few key behaviors of moveBefore() to note:

- Atomic move – Unlike appendChild or insertBefore, which remove and re-insert the node, moveBefore() performs an “atomic” move. This means the node’s internal state stays intact
- Familiar syntax – The syntax mirrors insertBefore(node, referenceNode) for familiarity, but the behavior is different
- Errors – If referenceNode isn’t a child of parentNode (and isn’t null), or if nodeToMove can’t be moved (e.g., it’s an ancestor of parentNode), it throws a DOMException
Why the moveBefore() API?

To understand the why behind moveBefore(), we need to understand how DOM manipulation actually works.
At its very core, DOM manipulation involves methods like appendChild()
, insertBefore()
, and removeChild()
. When you want to move an element – let’s say, shifting a <div>
from one parent to another – you typically remove it from its current location and reattach it elsewhere.
For example:
\\nconst element = document.querySelector(\\"#myElement\\");\\n const newParent = document.querySelector(\\"#newParent\\");\\n newParent.appendChild(element);\\n
The code above will detach myElement
from its original parent and append it to newParent
. Simple, right? But while this approach works for basic repositioning, it breaks down in complex applications.
There are three major problems you may face with this pattern of “moving,” i.e., detaching and reattaching elements.

1. State loss – When an element is detached and reattached, a CSS animation or an iframe’s internal state will reset. For instance, a running CSS animation might restart from its initial keyframe, disrupting the user experience. Focus is lost the same way (see the sketch after this list).

2. Performance – Moving elements by detaching and reattaching them will trigger reflows and repaints in the browser’s rendering engine. In a small DOM tree, this might be negligible. But in a large application, this operation can lead to jank, slowing down the interface.

3. Workarounds – In order to preserve state or performance, we must write workarounds: storing input values in variables, pausing animations, or debouncing reflows. What should have been straightforward becomes bloated.
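As a small illustration of the kind of bookkeeping this forces, here’s a sketch (with hypothetical myInput and newParent elements) that manually preserves focus across a remove-and-reinsert move:

// Hypothetical elements; removing a focused element from the DOM blurs it
const field = document.getElementById("myInput");
const newParent = document.getElementById("newParent");

const hadFocus = document.activeElement === field; // capture state first
newParent.appendChild(field);                      // detach + reattach drops focus
if (hadFocus) field.focus();                       // restore it manually

// With moveBefore(), none of this is needed:
// newParent.moveBefore(field, null);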
\\nmoveBefore()
APILet’s imagine you are designing a webpage for a course where users watch a video lecture while taking notes or viewing supplementary content. The video will be embedded in an <iframe>
, either from YouTube or Vimeo.
The interface has two major layouts:
\\nYou want to make users toggle between these layouts, and you want the video to keep playing without interruption as it moves between positions.
It would be unfair if the video restarted every time the user switched layouts. Just imagine losing your spot in a 20-minute lecture just because you opened the notes – that would be so annoying!
\\nUsing the old traditional appendChild()
DOM method, we’d implement it like so:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n<head>\\n <meta charset=\\"UTF-8\\">\\n <title>Beautiful Video Layout Toggle</title>\\n <style>\\n * {\\n box-sizing: border-box;\\n margin: 0;\\n padding: 0;\\n }\\n \\n body {\\n font-family: \'Segoe UI\', Tahoma, Geneva, Verdana, sans-serif;\\n background: #f7f9fc;\\n color: #333;\\n line-height: 1.6;\\n padding: 20px;\\n min-height: 100vh;\\n }\\n \\n .container {\\n max-width: 1200px;\\n margin: 0 auto;\\n padding: 20px;\\n }\\n \\n .header {\\n text-align: center;\\n padding: 20px 0 30px;\\n }\\n \\n .header h1 {\\n font-size: 2.5rem;\\n color: #2c3e50;\\n margin-bottom: 10px;\\n font-weight: 600;\\n }\\n \\n .header p {\\n color: #7f8c8d;\\n font-size: 1.1rem;\\n }\\n \\n #full-screen-container {\\n background: white;\\n border-radius: 12px;\\n box-shadow: 0 10px 30px rgba(0, 0, 0, 0.05);\\n padding: 30px;\\n margin-bottom: 30px;\\n transition: all 0.3s ease;\\n }\\n \\n #split-screen-container {\\n display: none;\\n width: calc(65% - 15px);\\n float: left;\\n background: white;\\n border-radius: 12px;\\n box-shadow: 0 10px 30px rgba(0, 0, 0, 0.05);\\n padding: 30px;\\n margin-right: 15px;\\n transition: all 0.3s ease;\\n }\\n \\n #notes-container {\\n display: none;\\n width: calc(35% - 15px);\\n float: right;\\n background: white;\\n border-radius: 12px;\\n box-shadow: 0 10px 30px rgba(0, 0, 0, 0.05);\\n padding: 30px;\\n margin-left: 15px;\\n transition: all 0.3s ease;\\n }\\n \\n h3 {\\n color: #2c3e50;\\n margin-bottom: 20px;\\n font-weight: 500;\\n font-size: 1.5rem;\\n }\\n \\n .video-wrapper {\\n position: relative;\\n padding-bottom: 10px;\\n text-align: center;\\n }\\n \\n iframe {\\n border: none;\\n border-radius: 8px;\\n box-shadow: 0 5px 15px rgba(0, 0, 0, 0.08);\\n max-width: 100%;\\n transition: all 0.3s ease;\\n }\\n \\n textarea {\\n width: 100%;\\n min-height: 300px;\\n padding: 15px;\\n border: 1px solid #e0e0e0;\\n border-radius: 8px;\\n font-family: inherit;\\n font-size: 1rem;\\n resize: vertical;\\n transition: all 0.3s ease;\\n }\\n \\n textarea:focus {\\n outline: none;\\n border-color: #3498db;\\n box-shadow: 0 0 0 2px rgba(52, 152, 219, 0.2);\\n }\\n \\n .toggle-button {\\n background: #3498db;\\n color: white;\\n border: none;\\n padding: 12px 24px;\\n font-size: 1rem;\\n font-weight: 500;\\n border-radius: 6px;\\n cursor: pointer;\\n margin: 20px auto;\\n display: block;\\n transition: all 0.2s ease;\\n box-shadow: 0 4px 6px rgba(52, 152, 219, 0.2);\\n }\\n \\n .toggle-button:hover {\\n background: #2980b9;\\n transform: translateY(-2px);\\n box-shadow: 0 6px 8px rgba(52, 152, 219, 0.25);\\n }\\n \\n .toggle-button:active {\\n transform: translateY(0);\\n }\\n \\n .clearfix::after {\\n content: \\"\\";\\n display: table;\\n clear: both;\\n }\\n \\n .footer {\\n text-align: center;\\n margin-top: 40px;\\n color: #7f8c8d;\\n font-size: 0.9rem;\\n }\\n \\n @media (max-width: 768px) {\\n #split-screen-container, #notes-container {\\n width: 100%;\\n float: none;\\n margin: 0 0 20px 0;\\n }\\n }\\n </style>\\n</head>\\n<body>\\n <div class=\\"container\\">\\n <div class=\\"header\\">\\n <h1>Interactive Video Experience</h1>\\n <p>Toggle between full screen and note-taking modes</p>\\n </div>\\n \\n <div id=\\"full-screen-container\\">\\n <h3>Video Presentation</h3>\\n <div class=\\"video-wrapper\\">\\n <iframe id=\\"video\\" src=\\"https://www.youtube.com/embed/Ki_0iES2cGI?autoplay=1\\" width=\\"800\\" height=\\"450\\" allowfullscreen></iframe>\\n </div>\\n </div>\\n \\n <div id=\\"split-screen-container\\" 
class=\\"clearfix\\">\\n <h3>Video Presentation</h3>\\n <div class=\\"video-wrapper\\">\\n <!-- Video will be moved here --\x3e\\n </div>\\n </div>\\n \\n <div id=\\"notes-container\\" class=\\"clearfix\\">\\n <h3>Your Notes</h3>\\n <textarea placeholder=\\"Take notes as you watch the video...\\"></textarea>\\n </div>\\n \\n <button class=\\"toggle-button\\" onclick=\\"toggleLayout()\\">Toggle Layout</button>\\n \\n <div class=\\"footer\\">\\n <p>(c) 2025 Interactive Learning Platform</p>\\n </div>\\n </div>\\n\\n <script>\\n const videoIframe = document.getElementById(\'video\');\\n const fullScreenContainer = document.getElementById(\'full-screen-container\');\\n const splitScreenContainer = document.getElementById(\'split-screen-container\');\\n const notesContainer = document.getElementById(\'notes-container\');\\n const splitVideoWrapper = splitScreenContainer.querySelector(\'.video-wrapper\');\\n let isFullScreen = true;\\n\\n function toggleLayout() {\\n if (isFullScreen) {\\n // Switch to split-screen\\n fullScreenContainer.style.display = \'none\';\\n splitScreenContainer.style.display = \'block\';\\n notesContainer.style.display = \'block\';\\n videoIframe.width = \'400\';\\n videoIframe.height = \'225\';\\n // Use appendChild: adds iframe to split-screen-container\\n splitVideoWrapper.appendChild(videoIframe);\\n } else {\\n // Switch to full-screen\\n fullScreenContainer.style.display = \'block\';\\n splitScreenContainer.style.display = \'none\';\\n notesContainer.style.display = \'none\';\\n videoIframe.width = \'800\';\\n videoIframe.height = \'450\';\\n // Use insertBefore: places iframe into the video-wrapper in full-screen-container\\n const fullVideoWrapper = fullScreenContainer.querySelector(\'.video-wrapper\');\\n fullVideoWrapper.appendChild(videoIframe);\\n }\\n isFullScreen = !isFullScreen;\\n }\\n </script>\\n</body>\\n</html>\\n
We can see above that the iframe in question moves, but it loses its state. In this case, you would need extra workaround code to make it behave correctly.
But with the introduction of moveBefore()
, we no longer need workarounds for something so basic:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n<head>\\n <meta charset=\\"UTF-8\\">\\n <title>Beautiful Video Experience</title>\\n <style>\\n * {\\n box-sizing: border-box;\\n margin: 0;\\n padding: 0;\\n }\\n \\n body {\\n font-family: \'Inter\', -apple-system, BlinkMacSystemFont, \'Segoe UI\', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;\\n background: linear-gradient(135deg, #f5f7fa 0%, #ebf0f6 100%);\\n color: #333;\\n line-height: 1.6;\\n min-height: 100vh;\\n padding: 30px;\\n }\\n \\n .container {\\n max-width: 1200px;\\n margin: 0 auto;\\n }\\n \\n .header {\\n text-align: center;\\n margin-bottom: 40px;\\n }\\n \\n .header h1 {\\n font-size: 2.4rem;\\n font-weight: 700;\\n color: #1a365d;\\n margin-bottom: 10px;\\n letter-spacing: -0.5px;\\n }\\n \\n .header p {\\n color: #4a5568;\\n font-size: 1.1rem;\\n }\\n \\n #full-screen-container {\\n background: white;\\n border-radius: 16px;\\n box-shadow: 0 10px 25px rgba(0, 0, 0, 0.05);\\n padding: 30px;\\n margin-bottom: 30px;\\n text-align: center;\\n overflow: hidden;\\n transition: all 0.3s ease;\\n }\\n \\n #split-screen-container {\\n display: none;\\n width: calc(60% - 15px);\\n float: left;\\n background: white;\\n border-radius: 16px;\\n box-shadow: 0 10px 25px rgba(0, 0, 0, 0.05);\\n padding: 30px;\\n margin-right: 15px;\\n transition: all 0.3s ease;\\n }\\n \\n #notes-container {\\n display: none;\\n width: calc(40% - 15px);\\n float: right;\\n background: white;\\n border-radius: 16px;\\n box-shadow: 0 10px 25px rgba(0, 0, 0, 0.05);\\n padding: 30px;\\n margin-left: 15px;\\n transition: all 0.3s ease;\\n }\\n \\n h2 {\\n color: #2d3748;\\n margin-bottom: 20px;\\n font-weight: 600;\\n font-size: 1.5rem;\\n }\\n \\n iframe {\\n border: none;\\n border-radius: 12px;\\n box-shadow: 0 6px 16px rgba(0, 0, 0, 0.1);\\n max-width: 100%;\\n transition: all 0.4s ease;\\n }\\n \\n textarea {\\n width: 100%;\\n min-height: 330px;\\n padding: 16px;\\n border: 1px solid #e2e8f0;\\n border-radius: 8px;\\n background-color: #f8fafc;\\n font-family: inherit;\\n font-size: 1rem;\\n line-height: 1.6;\\n resize: vertical;\\n transition: all 0.3s ease;\\n color: #2d3748;\\n }\\n \\n textarea:focus {\\n outline: none;\\n border-color: #4299e1;\\n box-shadow: 0 0 0 3px rgba(66, 153, 225, 0.15);\\n background-color: #fff;\\n }\\n \\n textarea::placeholder {\\n color: #a0aec0;\\n }\\n \\n .button-container {\\n text-align: center;\\n margin: 30px 0;\\n clear: both;\\n }\\n \\n .toggle-button {\\n background: #4299e1;\\n color: white;\\n border: none;\\n padding: 14px 28px;\\n font-size: 1rem;\\n font-weight: 500;\\n border-radius: 8px;\\n cursor: pointer;\\n transition: all 0.2s ease;\\n box-shadow: 0 4px 6px rgba(66, 153, 225, 0.2);\\n display: inline-flex;\\n align-items: center;\\n justify-content: center;\\n }\\n \\n .toggle-button:hover {\\n background: #3182ce;\\n transform: translateY(-2px);\\n box-shadow: 0 7px 10px rgba(66, 153, 225, 0.25);\\n }\\n \\n .toggle-button:active {\\n transform: translateY(0);\\n box-shadow: 0 4px 6px rgba(66, 153, 225, 0.2);\\n }\\n \\n .toggle-button svg {\\n margin-right: 10px;\\n }\\n \\n .status-badge {\\n display: inline-block;\\n margin-left: 15px;\\n font-size: 0.85rem;\\n padding: 5px 10px;\\n border-radius: 20px;\\n background-color: #edf2f7;\\n color: #4a5568;\\n }\\n \\n .video-container {\\n position: relative;\\n text-align: center;\\n margin: 0 auto;\\n }\\n \\n .clearfix::after {\\n content: \\"\\";\\n display: table;\\n clear: both;\\n }\\n \\n .footer {\\n text-align: center;\\n margin-top: 50px;\\n 
color: #718096;\\n font-size: 0.9rem;\\n padding: 20px 0;\\n }\\n \\n @media (max-width: 900px) {\\n body {\\n padding: 15px;\\n }\\n \\n .header h1 {\\n font-size: 2rem;\\n }\\n \\n #split-screen-container, #notes-container {\\n width: 100%;\\n float: none;\\n margin: 0 0 20px 0;\\n }\\n \\n iframe {\\n width: 100% !important;\\n height: auto !important;\\n aspect-ratio: 16/9;\\n }\\n }\\n </style>\\n</head>\\n<body>\\n <div class=\\"container\\">\\n <div class=\\"header\\">\\n <h1>Seamless Video Experience</h1>\\n <p>Toggle between cinematic view and note-taking mode</p>\\n </div>\\n \\n <div id=\\"full-screen-container\\">\\n <div class=\\"video-container\\">\\n <iframe id=\\"video\\" src=\\"https://www.youtube.com/embed/Ki_0iES2cGI?autoplay=1\\" width=\\"800\\" height=\\"450\\" allowfullscreen></iframe>\\n </div>\\n </div>\\n \\n <div id=\\"split-screen-container\\" class=\\"clearfix\\"></div>\\n \\n <div id=\\"notes-container\\" class=\\"clearfix\\">\\n <h2>Notes</h2>\\n <textarea placeholder=\\"Take notes as you watch the video...\\n \\n• Write down key points\\n• Questions to research later\\n• Your thoughts and observations\\n• Important timestamps to revisit\\"></textarea>\\n </div>\\n \\n <div class=\\"button-container\\">\\n <button class=\\"toggle-button\\" onclick=\\"toggleLayout()\\">\\n <svg xmlns=\\"http://www.w3.org/2000/svg\\" width=\\"16\\" height=\\"16\\" viewBox=\\"0 0 24 24\\" fill=\\"none\\" stroke=\\"currentColor\\" stroke-width=\\"2\\" stroke-linecap=\\"round\\" stroke-linejoin=\\"round\\">\\n <rect x=\\"2\\" y=\\"3\\" width=\\"20\\" height=\\"14\\" rx=\\"2\\" ry=\\"2\\"></rect>\\n <line x1=\\"8\\" y1=\\"21\\" x2=\\"16\\" y2=\\"21\\"></line>\\n <line x1=\\"12\\" y1=\\"17\\" x2=\\"12\\" y2=\\"21\\"></line>\\n </svg>\\n Toggle Layout\\n </button>\\n <span class=\\"status-badge\\" id=\\"tech-badge\\">\\n Using <span id=\\"tech-type\\">standard DOM</span>\\n </span>\\n </div>\\n \\n <div class=\\"footer\\">\\n <p>(c) 2025 Interactive Learning Platform • Powered by moveBefore API</p>\\n </div>\\n </div>\\n\\n <script>\\n const videoIframe = document.getElementById(\'video\');\\n const fullScreenContainer = document.getElementById(\'full-screen-container\');\\n const splitScreenContainer = document.getElementById(\'split-screen-container\');\\n const notesContainer = document.getElementById(\'notes-container\');\\n const techBadge = document.getElementById(\'tech-badge\');\\n const techType = document.getElementById(\'tech-type\');\\n let isFullScreen = true;\\n \\n // Check if moveBefore is supported\\n if (\'moveBefore\' in Element.prototype) {\\n techType.textContent = \'moveBefore API\';\\n techBadge.style.backgroundColor = \'#c6f6d5\';\\n techBadge.style.color = \'#276749\';\\n }\\n\\n function toggleLayout() {\\n if (isFullScreen) {\\n // Switch to split-screen\\n fullScreenContainer.style.display = \'none\';\\n splitScreenContainer.style.display = \'block\';\\n notesContainer.style.display = \'block\';\\n videoIframe.width = \'400\';\\n videoIframe.height = \'225\';\\n \\n if (\'moveBefore\' in Element.prototype) {\\n splitScreenContainer.moveBefore(videoIframe, null);\\n } else {\\n splitScreenContainer.appendChild(videoIframe);\\n }\\n } else {\\n // Switch to full-screen\\n fullScreenContainer.style.display = \'block\';\\n splitScreenContainer.style.display = \'none\';\\n notesContainer.style.display = \'none\';\\n videoIframe.width = \'800\';\\n videoIframe.height = \'450\';\\n \\n if (\'moveBefore\' in Element.prototype) {\\n 
fullScreenContainer.moveBefore(videoIframe, null);\\n } else {\\n fullScreenContainer.appendChild(videoIframe);\\n }\\n }\\n isFullScreen = !isFullScreen;\\n }\\n </script>\\n</body>\\n</html>\\n
In the GIF above, we can see how seamless it is.
\\nAs of April 2025, moveBefore()
is supported in Chrome 133+. Safari and Firefox have expressed interest, but we are still unable to use the moveBefore()
API in those browsers.
This is a drawback for the API, so I advise employing a fallback:
\\nif (\\"moveBefore\\" in Element.prototype) {\\n // Supported\\n } else {\\n // Fallback to appendChild or insertBefore\\n }\\n\\n
In this article, we examined in detail how to use the moveBefore()
API. We’ve seen its elegance and the improvements it brings to a long-standing pain point of DOM manipulation.
Though it has yet to land in other browsers, I’d predict we’ll be using it in Safari a few months from now.

Thank you for sticking around; feel free to share other ways we could utilize this new API in the comments. Keep coding, my friends!
GitHub Pages offers a simple and free way to host static websites, making it an excellent option for deploying React applications. This guide will walk you through the process of deploying a Create React App project to GitHub Pages, customizing your domain, and automating deployments with GitHub Actions.
\\nIf you already have a Create React App project and GitHub repository set up, you can deploy your app with these three quick steps:
\\nnpm install gh-pages --save-dev\\n\\n
2. Add a homepage field and the deploy scripts to your package.json:

{
  "homepage": "https://yourusername.github.io/your-repo-name",
  "scripts": {
    "predeploy": "npm run build",
    "deploy": "gh-pages -d build",
    // other scripts...
  }
}
3. Run the deploy script:

npm run deploy
That’s it! Your app will be available at https://yourusername.github.io/your-repo-name.
Note: This guide focuses on Create React App projects. If you’re using Vite, Next.js, or another framework, the deployment process will be different. For Vite projects, check the official Vite deployment guide. For Next.js, refer to the Next.js deployment documentation.
\\nFor a complete walkthrough, troubleshooting common issues, and advanced deployment options, continue reading below.
\\nTo follow along, you should have:
\\nEditor’s note: This blog was updated by Ikeh Akinyemi in April 2025 to include quickstart directions, narrow the focus to only apps built in Create React App, and address common GitHub Pages errors such as blank pages and 404s on refresh.
\\nGitHub Pages is a service from GitHub that enables you to add HTML, JavaScript, and CSS files to a repository and create a hosted static website.
\\nThe website can be hosted on GitHub’s github.io domain (e.g., https://username.github.io/repositoryname) or your custom domain. A React app can be similarly hosted on GitHub Pages.
\\nTo deploy your React application to GitHub Pages from scratch, follow these steps:
\\nFor a working example of this entire process, check out the react-gh-pages-example repository, which provides the source code you will see later in this article.
\\nLet’s get started by creating a new React application. For this tutorial, we’ll be using create-react-app
with React 18 and Node 20+.
Open the terminal on your computer and navigate to your preferred directory:
\\ncd desktop \\n\\n
Create a React application using create-react-app:
\\nnpx create-react-app \\"your-project-name\\"\\n\\n
In just a few minutes, create-react-app will have finished setting up a new React application! Now, let’s navigate into the newly created React app project directory:
\\ncd \\"your-project-name\\"\\n\\n
The next step is to create a GitHub repository to store our project’s files and revisions. In your GitHub account, click the + icon in the top right and follow the prompts to set up a new repository:
\\nAfter your repository has been successfully created, you should see a page with commands to push your existing repository:
\\nNow, initialize Git in your project:
\\ngit init\\n\\n
Add your files, commit, and push to your GitHub repository:
\\ngit add .\\ngit commit -m \\"first commit\\"\\ngit branch -M main\\ngit remote add origin https://github.com/yourusername/your-repo-name.git\\ngit push -u origin main\\n\\n
Next, we’ll install the gh-pages
package (version 6.0.0 or later) in our project. This package allows us to publish build files into a gh-pages
branch on GitHub, where they can then be hosted:
npm install gh-pages --save-dev\\n\\n
Now, let’s configure the package.json
file to point our GitHub repository to the location where our React app will be deployed.
Add a homepage property that follows this structure: https://{github-username}.github.io/{repo-name}. For example:
{\\n \\"name\\": \\"your-project-name\\",\\n \\"version\\": \\"0.1.0\\",\\n \\"homepage\\": \\"https://yourusername.github.io/your-repo-name\\",\\n // ...\\n}\\n\\n
Now, add the predeploy
and deploy
scripts to the scripts
section of your package.json
:
{\\n // ...\\n \\"scripts\\": {\\n // ...\\n \\"predeploy\\": \\"npm run build\\",\\n \\"deploy\\": \\"gh-pages -d build\\"\\n },\\n // ...\\n}\\n\\n
The predeploy
script will build your React app, and the deploy
script will publish the build folder to the gh-pages branch of your GitHub repository.
Now that everything is set up, commit your changes and push them to your GitHub repository:
\\ngit add .\\ngit commit -m \\"setup gh-pages\\"\\ngit push\\n\\n
Finally, deploy your React application by running:
\\nnpm run deploy\\n\\n
This command will create a bundled version of your React application and push it to a gh-pages
branch in your remote repository on GitHub.
To view your deployed React application, navigate to the Settings tab in your GitHub repository, click on the Pages menu, and you should see a link to the deployed app:
\\nWhen deploying React apps to GitHub Pages, you might encounter some common issues. Here’s how to diagnose and fix them:
\\nIf you see a blank page instead of your app:
1. Check your homepage URL — Ensure your homepage property in package.json exactly matches your GitHub Pages URL format (https://username.github.io/repo-name)
2. Check browser console for errors — Open the developer tools in your browser to see if there are any errors related to failed resource loading
\\n3. Verify file paths — If you’re using relative paths for assets, they might break in production. Update them to use the PUBLIC_URL
environment variable:
<img src={`${process.env.PUBLIC_URL}/logo.png`} alt="Logo" />
If your app works on the homepage but shows a 404 when refreshing or accessing direct routes:
1. Use HashRouter instead of BrowserRouter — GitHub Pages doesn’t support the browser history API. Update your index.js file to use HashRouter:
import { HashRouter as Router } from "react-router-dom";

root.render(
  <Router>
    <App />
  </Router>
);
This will change your URLs from /about
to /#/about
but will resolve the 404 issues.
2. Create a 404.html redirect — Alternatively, you can add a custom 404.html
file that redirects to your index.html
with the original URL parameters preserved.
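A minimal sketch of that approach (assuming your app is served from the hypothetical /your-repo-name/ path):

<!-- 404.html (sketch): remember the requested URL, then load the app shell -->
<script>
  sessionStorage.setItem("redirect", location.href); // stash the original URL
  location.replace("/your-repo-name/");              // hypothetical repo path
</script>

On startup, your app can then read sessionStorage.getItem("redirect") and navigate to the stored route.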
If images, fonts, or other assets aren’t loading, it’s usually the same path issue as above: reference them through process.env.PUBLIC_URL or import them through your bundler so the correct base path is applied.

Also review your repository configuration, such as whether the gh-pages branch is selected as the publishing source under Settings > Pages.
\\nIf your deployment fails during the build process:
- Delete the node_modules folder and package-lock.json file, then run npm install again

We can deploy our React app to GitHub’s domain for free, but GitHub Pages also supports custom subdomains and apex domains. Here are examples showing what each type of subdomain looks like:
\\nSupported custom domain | \\nExample | \\n
---|---|
www subdomain | \\nwww.logdeploy.com | \\n
Custom subdomain | \\napp.logdeploy.com | \\n
Apex domain | \\nlogdeploy.com | \\n
Right now, if we navigate to https://yourusername.github.io/your-repo-name/
, we’ll see our recently published website. But we could also use a custom subdomain or an apex domain instead. Here are the steps to set those up:

1. Add your custom domain in your repository’s Settings > Pages section
2. Create a CNAME file at the root of your repository
3. Ensure the CNAME record on your domain service provider points to the GitHub URL of the deployed website (in this case, yourusername.github.io). To do so, navigate to the DNS management page of the domain service provider and add a CNAME record that points to username.github.io, where username is your GitHub username

To deploy to an apex domain, follow the first two steps above for deploying to a custom subdomain, but substitute the third step with the following:

- Create an ALIAS record or ANAME record that points your apex domain to your GitHub Pages IP addresses, as shown:
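A zone-file sketch of those records (the IP addresses below are the ones GitHub documents for Pages at the time of writing; verify them in GitHub’s documentation before use):

; A records pointing the apex domain at GitHub Pages
logdeploy.com.  3600  IN  A  185.199.108.153
logdeploy.com.  3600  IN  A  185.199.109.153
logdeploy.com.  3600  IN  A  185.199.110.153
logdeploy.com.  3600  IN  A  185.199.111.153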
\\nNetlify makes it simple to configure redirects and rewrite rules for your URLs. All you need to do is create a file called _redirects
(without any extensions) in the app’s public folder.
Then, simply add the following rewrite rule within the file:
\\n/* /index.html 200\\n\\n
No matter what URL the browser requests, this rewrite rule will deliver the index.html
file instead of returning a 404.
If we want to handle page routing when we deploy to GitHub Pages, we’ll need to do something similar. Let’s configure routing for our previously deployed project.
\\nFirst, we need to install a router. Start by installing React Router in the project directory, like so:
\\nnpm install react-router-dom\\n\\n
Then, follow the next steps.
\\nHashRouter
to the application to enable client-side routing:import React from ‘react’;import ReactDOM from \'react-dom/client\';\\nimport \'./index.css\';\\nimport App from \'./App\';\\nimport reportWebVitals from \'./reportWebVitals\';\\nimport { HashRouter as Router } from \\"react-router-dom\\";\\nconst root = ReactDOM.createRoot(document.getElementById(\'root\'));\\nroot.render();\\n// If you want to start measuring performance in your app, pass a function\\n// to log results (for example, reportWebVitals(console.log))\\n// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals\\nreportWebVitals();\\n
Our index.js
file should look like the above code block. Because GitHub Pages does not support browser history, we’re employing a HashRouter
. Our existing path does not assist GitHub Pages in determining where to direct the user (because it is a frontend route).
To solve this issue, we must replace our application’s browser router with a HashRouter
. The hash component of the URL is used by this router to keep the UI in sync with the URL.
2. Create the routes:
\\nCreate a routes
folder and the required routes. These routes can be configured in the app.js
file. But first, let’s create a Navbar
component that can be visible on all pages.
Here’s the Navbar component:

import { Link } from "react-router-dom"

const Navbar = () => {
  return (
    <div>
      <Link to="/">Home</Link>
      <Link to="/about">About</Link>
      <Link to="/careers">Careers</Link>
    </div>
  )
}
export default Navbar;
Now we can add the Navbar
component alongside the configured routes in the app.js
file.
Here’s the app.js file:

import './App.css';
import { Routes, Route } from "react-router-dom";
import About from "./routes/About";
import Careers from "./routes/Careers";
import Home from "./routes/Home";
import Navbar from './Navbar';

function App() {
  return (
    <>
      <Navbar />
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/about" element={<About />} />
        <Route path="/careers" element={<Careers />} />
      </Routes>
    </>
  );
}
export default App;
Now that we’re done with the setup, let’s push our code, like so:
\\ngit add .\\ngit commit -m \\"setup gh-pages\\"\\ngit push\\n\\n
Next, we simply deploy, and our app should route properly:
\\nnpm run deploy\\n\\n
Once these steps are completed, our deployed application will correctly route the user to any part of the application they desire.
\\nWhen we configure Netlify deployments, we’re given a preview link to view our deployment before it is merged into the main branch. Let’s create the same for GitHub Pages.
\\nWe’ll use a simple tool called Livecycle for this, which saves us the trouble of having to do this using GitHub Actions. Every time we make a pull request in our repository, Livecycle assists us in creating a preview environment.
\\nTo create a preview environment, follow these steps:
\\ncreate-react-app
:Once deployment is successful, anytime we make a PR or push a commit to that PR, we’ll get a preview link.
\\nWe would proceed to integrate GitHub Actions for automated deployments. This streamlines the deployment process and enhances efficiency. Before we can deploy the app using this approach, we’ll create a workflow file in .github/workflows/deploy.yml
:
name: Deploy to GitHub Pages\\n\\non:\\n push:\\n branches:\\n - main\\n workflow_dispatch:\\n\\npermissions:\\n contents: write\\n pages: write\\n id-token: write\\n\\njobs:\\n build-and-deploy:\\n runs-on: ubuntu-latest\\n steps:\\n - name: Checkout Repository\\n uses: actions/checkout@v4\\n\\n - name: Setup Node\\n uses: actions/setup-node@v4\\n with:\\n node-version: \\"20\\"\\n cache: \\"npm\\"\\n\\n - name: Install Dependencies\\n run: npm ci\\n\\n - name: Build\\n run: npm run build\\n\\n - name: Deploy\\n uses: JamesIves/github-pages-deploy-action@v4\\n with:\\n folder: build\\n branch: gh-pages\\n\\n
This workflow file is used by GitHub Actions to determine how to run the jobs. The workflow does the following:
- Triggers on every push to the main branch, and can also be run manually via workflow_dispatch
- Checks out the repository and sets up Node 20 with npm caching
- Installs dependencies with npm ci and builds the app
- Deploys the build folder to the gh-pages branch

Next, run this:
\\ngit add . && git commit -m \\"Adds Github Actions\\" && git push\\n\\n
This will start the pipeline and proceed to deploy the built files to pages:
\\nIf you get an error: Branch \\"main\\" is not allowed to deploy to github-pages due to environment protection rules
, follow these steps: open your repository’s Settings, go to Environments, select github-pages, and add main to the list of allowed deployment branches.

This should allow deployments from the main branch to your github-pages environment:
\\nAfter adding this, retry the deploy stage, and this time it should pass successfully:
\\nSometimes, we need to add sensitive data to applications. But to do it securely, we don’t want to hardcode it directly into the application. Let’s see how to do this with the help of GitHub Actions.
\\nTo demonstrate this, we’ll update the About.js
page to include an environment variable, add a React environment variable to GitHub as a secret, and finally add this sensitive data securely to the workflow file:
To add a new secret to GitHub, go to your repository’s Settings > Secrets and variables > Actions, then:

1. Click the New repository secret button
2. Enter REACT_APP_API_KEY as the secret name, and 12345 as the secret value
3. Click the Add Secret button

Next, modify the workflow file to include this environment variable. This enables the variable REACT_APP_API_KEY
to be available to pages within the app:
name: Deploy to GitHub Pages\\n\\non:\\n push:\\n branches:\\n - main\\n workflow_dispatch:\\n\\npermissions:\\n contents: write\\n pages: write\\n id-token: write\\n\\njobs:\\n build-and-deploy:\\n runs-on: ubuntu-latest\\n steps:\\n - name: Checkout Repository\\n uses: actions/checkout@v4\\n\\n - name: Setup Node\\n uses: actions/setup-node@v4\\n with:\\n node-version: \\"20\\"\\n cache: \\"npm\\"\\n\\n - name: Install Dependencies\\n run: npm ci\\n\\n - name: Build\\n run: npm run build\\n env:\\n REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }}\\n\\n - name: Deploy\\n uses: JamesIves/github-pages-deploy-action@v4\\n with:\\n folder: build\\n branch: gh-pages\\n\\n
Finally, update the About.js
page to this:
const apiKey = process.env.REACT_APP_API_KEY;\\nconst About = () => {\\n return <h1>About page 1.0, Secret: {apiKey}</h1>;\\n};\\nexport default About;\\n\\n
Once deployed, we can see the environment variable displayed on the page:
\\nGitHub Pages offers a simple, free hosting solution that makes it an ideal choice for developers at all experience levels.
\\nThroughout this guide, we’ve discussed the complete process of deploying Create React App projects to GitHub Pages, from basic setup and configuration to advanced techniques like custom domains, routing solutions, and automated deployments with GitHub Actions. We’ve also addressed common pitfalls you might encounter and provided solutions to ensure a smooth deployment experience.
\\nIf you’re looking for an easy, cost-effective way to share your React applications with the world, GitHub Pages provides excellent integration with your existing GitHub workflow, making it particularly well-suited for open-source projects and personal portfolios.
In this article, we’ll introduce Float UI, a set of pre-made templates that leverage the power of Tailwind CSS to help developers create professional websites quickly. We’ll also walk through a tutorial for putting it to use.
\\nIntroducing Float UI, your savior. As the website states, Float UI is “a collection of modern UI components and website templates, built on top of React/Next.js with Tailwind CSS. The components are beautifully designed, expertly crafted, and allow you to build beautiful websites.”
\\nIt’s free, open source, and has pretty much everything a modern website needs – from fully-fledged components to useful parts like modals, navbars, and tabs.
\\nFloat UI integrates natively with React and Next.js, so many developers are already familiar with it. Since it uses Tailwind CSS, it provides a fast way to build websites with ready-made components while remaining highly customizable; you can modify whatever you want in the code since it’s just Tailwind CSS at its core.
\\nAnd since it’s open source, it is license-free, meaning any organization of any size can use it for free.
\\nFloat UI provides developers with ready-made templates written in Tailwind CSS. To use it, you need a React/Next.js app with Tailwind CSS installed. After that, you can simply grab any component you like, tweak it to your liking, and use it.
\\nOne important point to emphasize is that Float UI components are fully responsive, which removes a significant burden from developers’ shoulders.
\\nOn the left side of the Float UI page, components are categorized into two main sections: Marketing UI and Application UI.
\\nThe Marketing UI section consists of landing pages and business-style components like Hero sections, FAQs, and Testimonials, while Application UI focuses more on usability elements such as cards, pagination, modals, and menus. It’s entirely possible to build a fully functional web app using Float UI.
\\nOf course, you’ll need to write your own logic, but the UI is taken care of.
\\nThey also provide fully-built templates, which are free at the time of writing. These templates are well-made, can be previewed as they are already deployed on Vercel, and can significantly speed up the development process.
\\nWe will build a fully functional business page for a hypothetical coffee brewery called SmartBrew. For this, I ask you to imagine the following scenario:
\\nYou’re a freelance developer or an agency owner. One of your clients (SmartBrew) has asked you to build their website. It’s urgent—they need it today, and they want you to come up with a good business page. You don’t have enough time to code everything from scratch, but you also can’t use any other template builder.
\\nThe client insists on using Tailwind CSS and React because their internal developer, Greg, only works with Tailwind. Greg is on holiday, so the client has turned to you to develop their page. You want to retain this client, so you must be fast and efficient. You need to build a good business page today. You’re already behind schedule. What do you do?
You decide to go with the second option: building the page with Float UI’s ready-made components.
\\n\\nTo build the app, we will do the following:
\\nYou can check out the preview of what we’re going to build before we dive in.
\\nLet’s start by creating our React app using the npm create vite@latest
command and selecting the appropriate options. For this tutorial, we’ll use JavaScript, not TypeScript, so choose accordingly.
Next, follow this guide to install Tailwind CSS:
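For reference, with Tailwind CSS v4 and Vite the install is roughly the following (a sketch assuming the @tailwindcss/vite plugin; check the guide for your version):

npm install tailwindcss @tailwindcss/vite

Then register the plugin in vite.config.js:

// vite.config.js – sketch for Tailwind CSS v4's Vite plugin
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import tailwindcss from "@tailwindcss/vite";

export default defineConfig({
  plugins: [react(), tailwindcss()],
});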
\\nAfter completing the installation, delete everything inside the index.css
file and add the following:
import \\"tailwindcss\\";\\n
We will also delete everything in the App.css
file. Since we are using Float UI, we won’t be utilizing CSS files at all.
When developing React applications, it is best to use separate components for different sections of the web app instead of placing everything in a single file. We will create a components
folder to organize our components. Our folder structure will look like this:
- main.jsx\\n- App.jsx\\n- components\\n - Hero.jsx\\n - Features.jsx\\n - CTA.jsx\\n - Testimonials.jsx\\n - FAQ.jsx\\n - Contact.jsx\\n - Footer.jsx\\n
Each file inside the components
folder will correspond to a specific section, and we will import them into our App.jsx
file as follows:
import Hero from \\"./components/Hero.jsx\\";\\nimport Features from \\"./components/Features.jsx\\";\\nimport CTA from \\"./components/CTA.jsx\\";\\nimport Testimonials from \\"./components/Testimonials.jsx\\";\\nimport FAQ from \\"./components/FAQ.jsx\\";\\nimport Contact from \\"./components/Contact.jsx\\";\\nimport Footer from \\"./components/Footer.jsx\\";\\n\\nfunction App() {\\n return (\\n <>\\n <Hero />\\n <Features />\\n <CTA />\\n <Testimonials />\\n <FAQ />\\n <Contact />\\n <Footer />\\n </>\\n );\\n}\\n\\nexport default App;\\n\\n
If you’re following along with this tutorial, I recommend commenting out the imports for now and adding them as we go. At this stage, the app won’t compile since the files don’t exist yet.
\\nNow that we understand the app’s structure, we can proceed with confidence. We’ll start by creating the Hero section, which serves as the first impression for users.
\\nSince we’re short on time and using Float UI, we don’t need to write the entire section from scratch. Instead, we’ll visit the components page on the Float UI website. This section falls under the Marketing UI category.
\\nWe’ll select the Hero template called Secondary Hero Section:
Now, if we click on the Code section on the top right, we will have the template ready:
We will make some modifications to the template. Since our client is in the coffee business, brownish tones will be more suitable. Additionally, we’ll adjust the text and add an image.
\\n\\nPro tip: It’s 2025, and I highly recommend integrating AI into your development workflow—for example, to generate royalty-free images or logos. The logo and image used here were created with AI.
\\nNow we’ll generate our hero section:
Simple and beautiful, isn’t it?
\\nimport React from \\"react\\";\\nimport { useState } from \\"react\\";\\n\\nconst Hero = () => {\\n const [state, setState] = useState(false);\\n\\n // Updated navigation items for SmartBrew\\n const navigation = [\\n { title: \\"Menu\\", path: \\"javascript:void(0)\\" },\\n { title: \\"Locations\\", path: \\"javascript:void(0)\\" },\\n { title: \\"Rewards\\", path: \\"javascript:void(0)\\" },\\n { title: \\"Baristas\\", path: \\"javascript:void(0)\\" },\\n { title: \\"About Us\\", path: \\"javascript:void(0)\\" },\\n ];\\n\\n return (\\n <>\\n <header>\\n <nav className=\\"relative items-center pt-5 px-4 mx-auto max-w-screen-xl sm:px-8 sm:flex sm:space-x-6\\">\\n <div className=\\"flex justify-between\\">\\n <a href=\\"javascript:void(0)\\">\\n {/* Inline SmartBrew SVG logo */}\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n viewBox=\\"0 0 240 100\\"\\n width={120}\\n height={50}\\n >\\n {/* Coffee Cup */}\\n <path\\n d=\\"M65 20 H115 V70 Q115 85 100 85 H80 Q65 85 65 70 Z\\"\\n fill=\\"#8B4513\\"\\n />\\n\\n {/* Cup Handle */}\\n <path\\n d=\\"M115 30 Q130 30 130 45 Q130 60 115 60\\"\\n fill=\\"none\\"\\n stroke=\\"#8B4513\\"\\n strokeWidth=\\"6\\"\\n />\\n\\n {/* Steam */}\\n <path\\n d=\\"M80 15 Q85 5 90 15 Q95 5 100 15\\"\\n fill=\\"none\\"\\n stroke=\\"#8B4513\\"\\n strokeWidth=\\"3\\"\\n strokeLinecap=\\"round\\"\\n />\\n\\n {/* S */}\\n <path\\n d=\\"M140 35 Q150 30 160 35 Q170 40 160 50 Q150 60 160 65 Q170 70 180 65\\"\\n fill=\\"none\\"\\n stroke=\\"#8B4513\\"\\n strokeWidth=\\"6\\"\\n strokeLinecap=\\"round\\"\\n />\\n\\n {/* Text */}\\n <text\\n x=\\"120\\"\\n y=\\"95\\"\\n fontFamily=\\"Arial\\"\\n fontSize=\\"14\\"\\n fontWeight=\\"bold\\"\\n fill=\\"#8B4513\\"\\n textAnchor=\\"middle\\"\\n >\\n SMARTBREW\\n </text>\\n </svg>\\n </a>\\n <button\\n className=\\"text-gray-500 outline-none sm:hidden\\"\\n onClick={() => setState(!state)}\\n >\\n {state ? (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n className=\\"h-6 w-6\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n stroke=\\"currentColor\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n strokeWidth={2}\\n d=\\"M6 18L18 6M6 6l12 12\\"\\n />\\n </svg>\\n ) : (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n className=\\"h-6 w-6\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n stroke=\\"currentColor\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n strokeWidth={2}\\n d=\\"M4 6h16M4 12h16M4 18h16\\"\\n />\\n </svg>\\n )}\\n </button>\\n </div>\\n <ul\\n className={`bg-white shadow-md rounded-md p-4 flex-1 mt-12 absolute z-20 top-8 right-4 w-64 border sm:shadow-none sm:block sm:border-0 sm:mt-0 sm:static sm:w-auto ${\\n state ? 
\\"\\" : \\"hidden\\"\\n }`}\\n >\\n <div className=\\"order-1 justify-end items-center space-y-5 sm:flex sm:space-x-6 sm:space-y-0\\">\\n {navigation.map((item, idx) => (\\n <li className=\\"text-gray-500 hover:text-amber-800\\" key={idx}>\\n <a href={item.path}>{item.title}</a>\\n </li>\\n ))}\\n </div>\\n </ul>\\n </nav>\\n </header>\\n {/* Changed blue section to amber-800 (coffee brown) */}\\n <section className=\\"mt-24 mx-auto max-w-screen-xl pb-4 px-4 items-center lg:flex md:px-8\\">\\n <div className=\\"space-y-4 flex-1 sm:text-center lg:text-left\\">\\n <h1 className=\\"text-gray-800 font-bold text-4xl xl:text-5xl\\">\\n Mobile Ordering Made\\n <span className=\\"text-amber-800\\"> Simple</span>\\n </h1>\\n <p className=\\"text-gray-500 max-w-xl leading-relaxed sm:mx-auto lg:ml-0\\">\\n Enjoy your favorite coffee without the wait. Our new app lets you\\n order ahead and skip the line, saving you valuable time during your\\n busy day.\\n </p>\\n <div>\\n <p className=\\"text-gray-800 py-3\\">\\n Download our app and get your first coffee free\\n </p>\\n <form className=\\"items-center space-y-3 sm:justify-center sm:space-x-3 sm:space-y-0 sm:flex lg:justify-start\\">\\n <input\\n type=\\"text\\"\\n placeholder=\\"Enter your email\\"\\n className=\\"text-gray-500 border outline-none p-3 rounded-md w-full sm:w-72\\"\\n />\\n <button className=\\"outline-none bg-amber-800 text-white text-center px-4 py-3 rounded-md shadow w-full ring-offset-2 ring-amber-800 focus:ring-2 sm:w-auto\\">\\n Download Now\\n </button>\\n </form>\\n </div>\\n </div>\\n <div className=\\"flex-1 text-center mt-4 lg:mt-0 lg:ml-3\\">\\n {/* Coffee roastery SVG illustration */}\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n viewBox=\\"0 0 800 500\\"\\n className=\\"w-full mx-auto sm:w-10/12 lg:w-full\\"\\n >\\n {/* Background */}\\n <rect width=\\"800\\" height=\\"500\\" fill=\\"#f9f3e5\\" />\\n\\n {/* Wooden floor */}\\n <rect y=\\"400\\" width=\\"800\\" height=\\"100\\" fill=\\"#8b5a2b\\" />\\n <rect y=\\"410\\" width=\\"800\\" height=\\"10\\" fill=\\"#74461f\\" />\\n <rect y=\\"440\\" width=\\"800\\" height=\\"10\\" fill=\\"#74461f\\" />\\n <rect y=\\"470\\" width=\\"800\\" height=\\"10\\" fill=\\"#74461f\\" />\\n\\n {/* Wall elements */}\\n <rect x=\\"0\\" y=\\"100\\" width=\\"800\\" height=\\"300\\" fill=\\"#e6d5b8\\" />\\n <rect x=\\"50\\" y=\\"150\\" width=\\"200\\" height=\\"150\\" fill=\\"#d4b995\\" />\\n <rect x=\\"550\\" y=\\"150\\" width=\\"200\\" height=\\"150\\" fill=\\"#d4b995\\" />\\n\\n {/* Large coffee roaster */}\\n <ellipse cx=\\"400\\" cy=\\"380\\" rx=\\"120\\" ry=\\"30\\" fill=\\"#555\\" />\\n <rect\\n x=\\"340\\"\\n y=\\"200\\"\\n width=\\"120\\"\\n height=\\"180\\"\\n rx=\\"10\\"\\n fill=\\"#8B4513\\"\\n />\\n <rect\\n x=\\"330\\"\\n y=\\"190\\"\\n width=\\"140\\"\\n height=\\"20\\"\\n rx=\\"5\\"\\n fill=\\"#6b3811\\"\\n />\\n <rect x=\\"370\\" y=\\"160\\" width=\\"60\\" height=\\"30\\" fill=\\"#6b3811\\" />\\n <rect x=\\"390\\" y=\\"130\\" width=\\"20\\" height=\\"30\\" fill=\\"#6b3811\\" />\\n <ellipse cx=\\"400\\" cy=\\"130\\" rx=\\"20\\" ry=\\"10\\" fill=\\"#555\\" />\\n\\n {/* Roaster door and details */}\\n <ellipse cx=\\"400\\" cy=\\"250\\" rx=\\"40\\" ry=\\"40\\" fill=\\"#333\\" />\\n <ellipse cx=\\"400\\" cy=\\"250\\" rx=\\"35\\" ry=\\"35\\" fill=\\"#222\\" />\\n <ellipse cx=\\"400\\" cy=\\"250\\" rx=\\"20\\" ry=\\"20\\" fill=\\"#111\\" />\\n\\n {/* Temperature gauge */}\\n <circle\\n cx=\\"420\\"\\n cy=\\"320\\"\\n r=\\"15\\"\\n fill=\\"#ddd\\"\\n stroke=\\"#333\\"\\n 
strokeWidth=\\"3\\"\\n />\\n <line\\n x1=\\"420\\"\\n y1=\\"320\\"\\n x2=\\"430\\"\\n y2=\\"315\\"\\n stroke=\\"#f00\\"\\n strokeWidth=\\"2\\"\\n />\\n\\n {/* Coffee beans in baskets */}\\n <ellipse cx=\\"200\\" cy=\\"400\\" rx=\\"70\\" ry=\\"20\\" fill=\\"#6b3811\\" />\\n <ellipse cx=\\"200\\" cy=\\"390\\" rx=\\"65\\" ry=\\"15\\" fill=\\"#8B4513\\" />\\n\\n <ellipse cx=\\"600\\" cy=\\"400\\" rx=\\"70\\" ry=\\"20\\" fill=\\"#6b3811\\" />\\n <ellipse cx=\\"600\\" cy=\\"390\\" rx=\\"65\\" ry=\\"15\\" fill=\\"#8B4513\\" />\\n\\n {/* Scattered coffee beans */}\\n <circle cx=\\"150\\" cy=\\"380\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"170\\" cy=\\"385\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"190\\" cy=\\"375\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"210\\" cy=\\"385\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"230\\" cy=\\"380\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n\\n <circle cx=\\"550\\" cy=\\"380\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"570\\" cy=\\"385\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"590\\" cy=\\"375\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"610\\" cy=\\"385\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n <circle cx=\\"630\\" cy=\\"380\\" r=\\"5\\" fill=\\"#5e2c04\\" />\\n\\n {/* Steam from roaster */}\\n <path\\n d=\\"M390 130 Q380 110 390 100 Q400 90 410 100 Q420 110 410 130\\"\\n fill=\\"#f9f3e5\\"\\n fillOpacity=\\"0.7\\"\\n />\\n\\n {/* Barista silhouette */}\\n <ellipse cx=\\"520\\" cy=\\"300\\" rx=\\"30\\" ry=\\"80\\" fill=\\"#333\\" />\\n <circle cx=\\"520\\" cy=\\"210\\" r=\\"20\\" fill=\\"#333\\" />\\n\\n {/* Burlap sacks of coffee */}\\n <path d=\\"M80 350 L130 350 L120 400 L90 400 Z\\" fill=\\"#b89162\\" />\\n <path d=\\"M85 355 L125 355 L117 395 L93 395 Z\\" fill=\\"#aa8657\\" />\\n <text\\n x=\\"105\\"\\n y=\\"380\\"\\n fontFamily=\\"Arial\\"\\n fontSize=\\"12\\"\\n textAnchor=\\"middle\\"\\n fill=\\"#5e2c04\\"\\n >\\n COFFEE\\n </text>\\n\\n <path d=\\"M700 350 L750 350 L740 400 L710 400 Z\\" fill=\\"#b89162\\" />\\n <path d=\\"M705 355 L745 355 L737 395 L713 395 Z\\" fill=\\"#aa8657\\" />\\n <text\\n x=\\"725\\"\\n y=\\"380\\"\\n fontFamily=\\"Arial\\"\\n fontSize=\\"12\\"\\n textAnchor=\\"middle\\"\\n fill=\\"#5e2c04\\"\\n >\\n COFFEE\\n </text>\\n\\n {/* Ambient lighting effect */}\\n <ellipse\\n cx=\\"400\\"\\n cy=\\"250\\"\\n rx=\\"300\\"\\n ry=\\"150\\"\\n fill=\\"#f9b572\\"\\n fillOpacity=\\"0.1\\"\\n />\\n </svg>\\n </div>\\n </section>\\n </>\\n );\\n};\\n\\nexport default Hero;\\n
Since the full component is rather lengthy, I’ve also hosted it on GitHub.
\\nWhile it looks like a lot of code, we didn’t write much at all! We used the template given by Float UI, changed the color, added the images, and tweaked the text. It didn’t take more than a couple minutes to build. Now, that is some serious speed.
To customize these templates, you’ll need to know Tailwind classes. At least, that was the case in the past; now you can simply use AI to get help.
\\nPro tip: I’d suggest that you do not ask the AI to give you the whole code or component, as it will not give you what you have in mind in most cases, and you’ll spend more time debugging.
I typically ask the AI the smallest, laser-focused questions, like, “what is the class that changes the color in Tailwind CSS?” or, “how can I add margin in Tailwind CSS?”. These focused questions get you exactly what you need, make you faster, and keep you from losing time to debugging.
Now, if you import this Hero section into App.jsx (assuming you already have, with the not-yet-created components commented out), running the npm run dev command will present you with this simple yet elegant Hero section.
command will present you with this simple yet elegant Hero section.
Now that we understand how we will be working, we can continue adding the other sections. This process will make us so fast that a whole landing page like this won’t take us more than an hour or so.
\\nFor the features section, we go to the Feature Sections part of the Float UI and choose Feature Sections with cards shown here:
We will again make some small changes, and it will look like this:
You see, only the colors and text have been changed. It is also responsive by default. How cool is that?
import React from \\"react\\";\\n\\nconst Features = () => {\\n const features = [\\n {\\n icon: (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n className=\\"w-6 h-6\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M3.75 13.5l10.5-11.25L12 10.5h8.25L9.75 21.75 12 13.5H3.75z\\"\\n />\\n </svg>\\n ),\\n title: \\"Fast Ordering\\",\\n desc: \\"Place your order in seconds with our intuitive app interface. Customize your drink exactly how you like it.\\",\\n },\\n {\\n icon: (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n className=\\"w-6 h-6\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M3 13.125C3 12.504 3.504 12 4.125 12h2.25c.621 0 1.125.504 1.125 1.125v6.75C7.5 20.496 6.996 21 6.375 21h-2.25A1.125 1.125 0 013 19.875v-6.75zM9.75 8.625c0-.621.504-1.125 1.125-1.125h2.25c.621 0 1.125.504 1.125 1.125v11.25c0 .621-.504 1.125-1.125 1.125h-2.25a1.125 1.125 0 01-1.125-1.125V8.625zM16.5 4.125c0-.621.504-1.125 1.125-1.125h2.25C20.496 3 21 3.504 21 4.125v15.75c0 .621-.504 1.125-1.125 1.125h-2.25a1.125 1.125 0 01-1.125-1.125V4.125z\\"\\n />\\n </svg>\\n ),\\n title: \\"Loyalty Rewards\\",\\n desc: \\"Earn points with every purchase and redeem them for free drinks, pastries, and exclusive SmartBrew merchandise.\\",\\n },\\n {\\n icon: (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n className=\\"w-6 h-6\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M16.5 10.5V6.75a4.5 4.5 0 10-9 0v3.75m-.75 11.25h10.5a2.25 2.25 0 002.25-2.25v-6.75a2.25 2.25 0 00-2.25-2.25H6.75a2.25 2.25 0 00-2.25 2.25v6.75a2.25 2.25 0 002.25 2.25z\\"\\n />\\n </svg>\\n ),\\n title: \\"Customized Favorites\\",\\n desc: \\"Save your favorite drinks and easily reorder them with just one tap on your next visit.\\",\\n },\\n {\\n icon: (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n className=\\"w-6 h-6\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M9 12.75L11.25 15 15 9.75m-3-7.036A11.959 11.959 0 013.598 6 11.99 11.99 0 003 9.749c0 5.592 3.824 10.29 9 11.623 5.176-1.332 9-6.03 9-11.622 0-1.31-.21-2.571-.598-3.751h-.152c-3.196 0-6.1-1.248-8.25-3.285z\\"\\n />\\n </svg>\\n ),\\n title: \\"Mobile Payment\\",\\n desc: \\"Securely store payment methods and check out faster than ever before.\\",\\n },\\n {\\n icon: (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n className=\\"w-6 h-6\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M6.429 9.75L2.25 12l4.179 2.25m0-4.5l5.571 3 5.571-3m-11.142 0L2.25 7.5 12 2.25l9.75 5.25-4.179 2.25m0 0L21.75 12l-4.179 2.25m0 0l4.179 2.25L12 21.75 2.25 16.5l4.179-2.25m11.142 0l-5.571 3-5.571-3\\"\\n />\\n </svg>\\n ),\\n title: \\"Store Locator\\",\\n desc: \\"Find the nearest SmartBrew location and check current wait times before you arrive.\\",\\n },\\n {\\n icon: (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n className=\\"w-6 
h-6\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M9.813 15.904L9 18.75l-.813-2.846a4.5 4.5 0 00-3.09-3.09L2.25 12l2.846-.813a4.5 4.5 0 003.09-3.09L9 5.25l.813 2.846a4.5 4.5 0 003.09 3.09L15.75 12l-2.846.813a4.5 4.5 0 00-3.09 3.09z\\"\\n />\\n </svg>\\n ),\\n title: \\"Order History\\",\\n desc: \\"Easily reorder your favorite drinks from your purchase history with just one tap.\\",\\n },\\n ];\\n\\n return (\\n <section className=\\"py-14\\">\\n <div className=\\"max-w-screen-xl mx-auto px-4 text-gray-600 md:px-8\\">\\n <div className=\\"relative max-w-2xl mx-auto sm:text-center\\">\\n <div className=\\"relative z-10\\">\\n <h3 className=\\"text-gray-800 text-3xl font-semibold sm:text-4xl\\">\\n Skip the line and order ahead with our new mobile app\\n </h3>\\n <p className=\\"mt-3\\">\\n Perfect for busy mornings or your afternoon coffee break.\\n </p>\\n </div>\\n <div\\n className=\\"absolute inset-0 max-w-xs mx-auto h-44 blur-[118px]\\"\\n style={{\\n background:\\n \\"linear-gradient(152.92deg, rgba(192, 132, 252, 0.2) 4.54%, rgba(232, 121, 249, 0.26) 34.2%, rgba(192, 132, 252, 0.1) 77.55%)\\",\\n }}\\n ></div>\\n </div>\\n <div className=\\"relative mt-12\\">\\n <ul className=\\"grid gap-8 sm:grid-cols-2 lg:grid-cols-3\\">\\n {features.map((item, idx) => (\\n <li\\n key={idx}\\n className=\\"bg-white space-y-3 p-4 border rounded-lg\\"\\n >\\n <div className=\\"text-amber-800 pb-3\\">{item.icon}</div>\\n <h4 className=\\"text-lg text-gray-800 font-semibold\\">\\n {item.title}\\n </h4>\\n <p>{item.desc}</p>\\n </li>\\n ))}\\n </ul>\\n </div>\\n </div>\\n </section>\\n );\\n};\\n\\nexport default Features;\\n
Like the Hero section, we import this section to our App.jsx
and use it.
Every good landing page needs to have a CTA section which beckons the user. We will follow our previous steps and choose a template from the CTA Sections part of the Float UI website. I chose CTA with Blue background and made some small changes to it.
\\nThe original:
Our version:
The code can be found here:
\\nimport React from \\"react\\";\\n\\nconst CTA = () => {\\n return (\\n <div>\\n {\\" \\"}\\n <section className=\\"py-28 relative bg-amber-600\\">\\n <div className=\\"relative z-10 max-w-screen-xl mx-auto px-4 md:text-center md:px-8\\">\\n <div className=\\"max-w-xl md:mx-auto\\">\\n <p className=\\"text-white text-3xl font-semibold sm:text-4xl\\">\\n Brewing a Better Coffee Experience{\\" \\"}\\n </p>\\n <p className=\\"text-amber-100 mt-3\\">\\n We\'ve combined our passion for quality coffee with modern\\n technology to create a seamless experience for our valued\\n customers.\\n </p>\\n </div>\\n <div className=\\"mt-4\\">\\n <a\\n href=\\"javascript:void(0)\\"\\n className=\\"inline-block py-2 px-4 text-gray-800 font-medium bg-white duration-150 hover:bg-gray-100 active:bg-gray-200 rounded-full\\"\\n >\\n Get started\\n </a>\\n </div>\\n </div>\\n <div\\n className=\\"absolute top-0 w-full h-full\\"\\n style={{\\n background:\\n \\"linear-gradient(268.24deg, rgba(59, 130, 246, 0.76) 50%, rgba(59, 130, 246, 0.545528) 80.61%, rgba(55, 48, 163, 0) 117.35%)\\",\\n }}\\n ></div>\\n </section>\\n </div>\\n );\\n};\\n\\nexport default CTA;\\n\\n
You see, we only change the color and the text. Everything is ready-made.
\\nNow that we’ve understood how we’re working, I will keep it short from now on.
\\nHere’s the code for the Testimonials section:
\\nimport React from \\"react\\";\\n\\nconst Testimonials = () => {\\n return (\\n <div>\\n <section className=\\"py-14\\">\\n <div className=\\"max-w-screen-xl mx-auto px-4 md:px-8\\">\\n <div className=\\"max-w-3xl mx-auto\\">\\n <figure>\\n <blockquote>\\n <p className=\\"text-gray-800 text-xl text-center font-semibold sm:text-2xl\\">\\n \\"SmartBrew has completely changed my morning routine. I order\\n on my way to work and my perfectly crafted latte is waiting\\n when I arrive. No more standing in line!\\"\\n </p>\\n </blockquote>\\n <div className=\\"flex justify-center items-center gap-x-4 mt-6\\">\\n <img\\n src=\\"https://api.uifaces.co/our-content/donated/xZ4wg2Xj.jpg\\"\\n className=\\"w-16 h-16 rounded-full\\"\\n />\\n <div>\\n <span className=\\"block text-gray-800 font-semibold\\">\\n Martin escobar\\n </span>\\n <span className=\\"block text-gray-600 text-sm mt-0.5\\">\\n Daily Customer{\\" \\"}\\n </span>\\n </div>\\n </div>\\n </figure>\\n </div>\\n </div>\\n </section>\\n </div>\\n );\\n};\\n\\nexport default Testimonials;\\n\\n
Here’s the code for the FAQ section, followed by the code for the Contact section:
\\nimport React from \\"react\\";\\n\\nconst FAQ = () => {\\n const faqsList = [\\n {\\n q: \\"How do I download the SmartBrew app?\\",\\n a: \\"The SmartBrew app is available for free download on both the Apple App Store and Google Play Store. Simply search \'SmartBrew\' and look for our logo.\\",\\n },\\n {\\n q: \\"Can I customize my drink orders?\\",\\n a: \\"Absolutely! Our app offers all the same customization options available in-store. Adjust milk type, espresso shots, flavors, and more with easy tap controls.\\",\\n },\\n {\\n q: \\"How does the loyalty program work?\\",\\n a: \\"You earn 1 point for every dollar spent through the app. Once you reach 50 points, you\'ll receive a free drink of your choice. Additional rewards unlock at higher point levels.\\",\\n },\\n {\\n q: \\"Is mobile ordering available at all SmartBrew locations?\\",\\n a: \\"Yes, mobile ordering is available at all SmartBrew locations. The app will show you nearby cafes and their current wait times.\\",\\n },\\n {\\n q: \\"How far in advance can I place an order?\\",\\n a: \\"You can place orders up to 24 hours in advance. Perfect for scheduling your morning coffee pickup or organizing a coffee run for your office.\\",\\n },\\n {\\n q: \\"What payment methods are accepted in the app?\\",\\n a: \\"We accept all major credit and debit cards, SmartBrew gift cards, Apple Pay, and Google Pay for secure, contactless payment.\\",\\n },\\n ];\\n\\n return (\\n <section className=\\"leading-relaxed max-w-screen-xl mt-12 mx-auto px-4 md:px-8\\">\\n <div className=\\"space-y-3 text-center\\">\\n <h1 className=\\"text-3xl text-gray-800 font-semibold\\">\\n Frequently Asked Questions\\n </h1>\\n <p className=\\"text-gray-600 max-w-lg mx-auto text-lg\\">\\n Everything you need to know about our new mobile ordering app.\\n </p>\\n </div>\\n <div className=\\"mt-14 gap-4 sm:grid sm:grid-cols-2 lg:grid-cols-3\\">\\n {faqsList.map((item, idx) => (\\n <div className=\\"space-y-3 mt-5\\" key={idx}>\\n <h4 className=\\"text-xl text-gray-700 font-medium\\">{item.q}</h4>\\n <p className=\\"text-gray-500\\">{item.a}</p>\\n </div>\\n ))}\\n </div>\\n </section>\\n );\\n};\\n\\nexport default FAQ;\\n
import React from \\"react\\";\\n\\nconst Contact = () => {\\n const contactMethods = [\\n {\\n icon: (\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n className=\\"w-6 h-6\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M12 21a9.004 9.004 0 008.716-6.747M12 21a9.004 9.004 0 01-8.716-6.747M12 21c2.485 0 4.5-4.03 4.5-9S14.485 3 12 3m0 18c-2.485 0-4.5-4.03-4.5-9S9.515 3 12 3m0 0a8.997 8.997 0 017.843 4.582M12 3a8.997 8.997 0 00-7.843 4.582m15.686 0A11.953 11.953 0 0112 10.5c-2.998 0-5.74-1.1-7.843-2.918m15.686 0A8.959 8.959 0 0121 12c0 .778-.099 1.533-.284 2.253m0 0A17.919 17.919 0 0112 16.5c-3.162 0-6.133-.815-8.716-2.247m0 0A9.015 9.015 0 013 12c0-1.605.42-3.113 1.157-4.418\\"\\n />\\n </svg>\\n ),\\n title: \\"Join our community\\",\\n desc: \\"Stay updated on seasonal specials and coffee events near you.\\",\\n link: {\\n name: \\"Join our Discord\\",\\n href: \\"javascript:void(0)\\",\\n },\\n },\\n {\\n icon: (\\n <svg\\n className=\\"w-6 h-6\\"\\n viewBox=\\"0 0 48 48\\"\\n fill=\\"none\\"\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n >\\n <g clip-path=\\"url(#clip0_17_80)\\">\\n <path\\n d=\\"M15.1003 43.5C33.2091 43.5 43.1166 28.4935 43.1166 15.4838C43.1166 15.0619 43.1072 14.6307 43.0884 14.2088C45.0158 12.815 46.679 11.0886 48 9.11066C46.205 9.90926 44.2993 10.4308 42.3478 10.6575C44.4026 9.42588 45.9411 7.491 46.6781 5.21159C44.7451 6.35718 42.6312 7.16528 40.4269 7.60128C38.9417 6.02318 36.978 4.97829 34.8394 4.62816C32.7008 4.27803 30.5064 4.64216 28.5955 5.66425C26.6846 6.68635 25.1636 8.30947 24.2677 10.2827C23.3718 12.2559 23.1509 14.4693 23.6391 16.5807C19.725 16.3842 15.8959 15.3675 12.4 13.5963C8.90405 11.825 5.81939 9.33893 3.34594 6.29909C2.0888 8.46655 1.70411 11.0314 2.27006 13.4722C2.83601 15.9131 4.31013 18.047 6.39281 19.44C4.82926 19.3904 3.29995 18.9694 1.93125 18.2119V18.3338C1.92985 20.6084 2.7162 22.8132 4.15662 24.5736C5.59704 26.334 7.60265 27.5412 9.8325 27.99C8.38411 28.3863 6.86396 28.4441 5.38969 28.1588C6.01891 30.1149 7.24315 31.8258 8.89154 33.0527C10.5399 34.2796 12.5302 34.9613 14.5847 35.0025C11.0968 37.7423 6.78835 39.2283 2.35313 39.2213C1.56657 39.2201 0.780798 39.1719 0 39.0769C4.50571 41.9676 9.74706 43.5028 15.1003 43.5Z\\"\\n fill=\\"currentColor\\"\\n />\\n </g>\\n <defs>\\n <clipPath id=\\"clip0_17_80\\">\\n <rect width=\\"48\\" height=\\"48\\" fill=\\"white\\" />\\n </clipPath>\\n </defs>\\n </svg>\\n ),\\n\\n title: \\"Follow us on Twitter\\",\\n desc: \\"Share your SmartBrew experience and connect with other coffee lovers.\\",\\n link: {\\n name: \\"Send us DMs\\",\\n href: \\"javascript:void(0)\\",\\n },\\n },\\n ];\\n return (\\n <section className=\\"py-14\\">\\n <div className=\\"max-w-screen-xl mx-auto px-4 text-gray-600 gap-12 md:px-8 lg:flex\\">\\n <div className=\\"max-w-md\\">\\n <h3 className=\\"text-gray-800 text-3xl font-semibold sm:text-4xl\\">\\n Let\'s connect\\n </h3>\\n <p className=\\"mt-3\\">\\n We love hearing from our customers! 
Reach out with questions,\\n feedback, or just to say hello.\\n </p>\\n </div>\\n <div>\\n <ul className=\\"mt-12 gap-y-6 gap-x-12 items-center md:flex lg:gap-x-0 lg:mt-0\\">\\n {contactMethods.map((item, idx) => (\\n <li\\n key={idx}\\n className=\\"space-y-3 border-t py-6 md:max-w-sm md:py-0 md:border-t-0 lg:border-l lg:px-12 lg:max-w-none\\"\\n >\\n <div className=\\"w-12 h-12 rounded-full border flex items-center justify-center text-gray-700\\">\\n {item.icon}\\n </div>\\n <h4 className=\\"text-gray-800 text-lg font-medium xl:text-xl\\">\\n {item.title}\\n </h4>\\n <p>{item.desc}</p>\\n <a\\n href={item.link.href}\\n className=\\"flex items-center gap-1 text-sm text-amber-800 duration-150 hover:text-indigo-400 font-medium\\"\\n >\\n {item.link.name}\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n viewBox=\\"0 0 20 20\\"\\n fill=\\"currentColor\\"\\n className=\\"w-5 h-5\\"\\n >\\n <path\\n fillRule=\\"evenodd\\"\\n d=\\"M5 10a.75.75 0 01.75-.75h6.638L10.23 7.29a.75.75 0 111.04-1.08l3.5 3.25a.75.75 0 010 1.08l-3.5 3.25a.75.75 0 11-1.04-1.08l2.158-1.96H5.75A.75.75 0 015 10z\\"\\n clipRule=\\"evenodd\\"\\n />\\n </svg>\\n </a>\\n </li>\\n ))}\\n </ul>\\n </div>\\n </div>\\n </section>\\n );\\n};\\n\\nexport default Contact;\\n

Finally, here’s the code for the Footer section:
import React from \\"react\\";\\n\\nconst Footer = () => {\\n const footerNavs = [\\n {\\n label: \\"Resources\\",\\n items: [\\n {\\n href: \\"javascript:void()\\",\\n name: \\"contact\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Support\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Documentation\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Pricing\\",\\n },\\n ],\\n },\\n {\\n label: \\"About\\",\\n items: [\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Terms\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"License\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Privacy\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"About US\\",\\n },\\n ],\\n },\\n {\\n label: \\"Explore\\",\\n items: [\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Showcase\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Roadmap\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Languages\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Blog\\",\\n },\\n ],\\n },\\n {\\n label: \\"Company\\",\\n items: [\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Partners\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Team\\",\\n },\\n {\\n href: \\"javascript:void()\\",\\n name: \\"Careers\\",\\n },\\n ],\\n },\\n ];\\n return (\\n <footer className=\\"pt-10\\">\\n <div className=\\"max-w-screen-xl mx-auto px-4 md:px-8\\">\\n <div className=\\"justify-between items-center gap-12 md:flex\\">\\n <div className=\\"flex-1 max-w-lg\\">\\n <h3 className=\\"text-2xl font-bold\\">\\n Get our beautiful newsletter straight to your inbox.\\n </h3>\\n </div>\\n <div className=\\"flex-1 mt-6 md:mt-0\\">\\n <form\\n onSubmit={(e) => e.preventDefault()}\\n className=\\"flex items-center gap-x-3 md:justify-end\\"\\n >\\n <div className=\\"relative\\">\\n <svg\\n className=\\"w-6 h-6 text-gray-400 absolute left-3 inset-y-0 my-auto\\"\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth={1.5}\\n stroke=\\"currentColor\\"\\n >\\n <path\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M21.75 6.75v10.5a2.25 2.25 0 01-2.25 2.25h-15a2.25 2.25 0 01-2.25-2.25V6.75m19.5 0A2.25 2.25 0 0019.5 4.5h-15a2.25 2.25 0 00-2.25 2.25m19.5 0v.243a2.25 2.25 0 01-1.07 1.916l-7.5 4.615a2.25 2.25 0 01-2.36 0L3.32 8.91a2.25 2.25 0 01-1.07-1.916V6.75\\"\\n />\\n </svg>\\n <input\\n type=\\"email\\"\\n required\\n placeholder=\\"Enter your email\\"\\n className=\\"w-full pl-12 pr-3 py-2 text-gray-500 bg-white outline-none border focus:border-amber-600 shadow-sm rounded-lg\\"\\n />\\n </div>\\n <button className=\\"block w-auto py-3 px-4 font-medium text-sm text-center text-white bg-amber-600 hover:bg-amber-500 active:bg-amber-700 active:shadow-none rounded-lg shadow\\">\\n Subscribe\\n </button>\\n </form>\\n </div>\\n </div>\\n <div className=\\"flex-1 mt-16 space-y-6 justify-between sm:flex md:space-y-0\\">\\n {footerNavs.map((item, idx) => (\\n <ul className=\\"space-y-4 text-gray-600\\" key={idx}>\\n <h4 className=\\"text-gray-800 font-semibold sm:pb-2\\">\\n {item.label}\\n </h4>\\n {item.items.map((el, idx) => (\\n <li key={idx}>\\n <a\\n href={el.href}\\n className=\\"hover:text-gray-800 duration-150\\"\\n >\\n {el.name}\\n </a>\\n </li>\\n ))}\\n </ul>\\n ))}\\n </div>\\n <div className=\\"mt-10 py-10 border-t items-center justify-between sm:flex\\">\\n <p className=\\"text-gray-600\\">\\n © 2025 SmartBrew. 
All rights reserved.\\n </p>\\n <div className=\\"flex items-center gap-x-6 text-gray-400 mt-6\\">\\n <a href=\\"javascript:void()\\">\\n <svg\\n className=\\"w-6 h-6 hover:text-gray-500 duration-150\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 48 48\\"\\n >\\n <g clip-path=\\"url(#a)\\">\\n <path\\n fill=\\"currentColor\\"\\n d=\\"M48 24C48 10.745 37.255 0 24 0S0 10.745 0 24c0 11.979 8.776 21.908 20.25 23.708v-16.77h-6.094V24h6.094v-5.288c0-6.014 3.583-9.337 9.065-9.337 2.625 0 5.372.469 5.372.469v5.906h-3.026c-2.981 0-3.911 1.85-3.911 3.75V24h6.656l-1.064 6.938H27.75v16.77C39.224 45.908 48 35.978 48 24z\\"\\n />\\n </g>\\n <defs>\\n <clipPath id=\\"a\\">\\n <path fill=\\"#fff\\" d=\\"M0 0h48v48H0z\\" />\\n </clipPath>\\n </defs>\\n </svg>\\n </a>\\n <a href=\\"javascript:void()\\">\\n <svg\\n className=\\"w-6 h-6 hover:text-gray-500 duration-150\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 48 48\\"\\n >\\n <g clip-path=\\"url(#clip0_17_80)\\">\\n <path\\n fill=\\"currentColor\\"\\n d=\\"M15.1 43.5c18.11 0 28.017-15.006 28.017-28.016 0-.422-.01-.853-.029-1.275A19.998 19.998 0 0048 9.11c-1.795.798-3.7 1.32-5.652 1.546a9.9 9.9 0 004.33-5.445 19.794 19.794 0 01-6.251 2.39 9.86 9.86 0 00-16.788 8.979A27.97 27.97 0 013.346 6.299 9.859 9.859 0 006.393 19.44a9.86 9.86 0 01-4.462-1.228v.122a9.844 9.844 0 007.901 9.656 9.788 9.788 0 01-4.442.169 9.867 9.867 0 009.195 6.843A19.75 19.75 0 010 39.078 27.937 27.937 0 0015.1 43.5z\\"\\n />\\n </g>\\n <defs>\\n <clipPath id=\\"clip0_17_80\\">\\n <path fill=\\"#fff\\" d=\\"M0 0h48v48H0z\\" />\\n </clipPath>\\n </defs>\\n </svg>\\n </a>\\n <a href=\\"javascript:void()\\">\\n <svg\\n className=\\"w-6 h-6 hover:text-gray-500 duration-150\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 48 48\\"\\n >\\n <g fill=\\"currentColor\\" clip-path=\\"url(#clip0_910_44)\\">\\n <path\\n fill-rule=\\"evenodd\\"\\n d=\\"M24 1A24.086 24.086 0 008.454 6.693 23.834 23.834 0 00.319 21.044a23.754 23.754 0 003.153 16.172 23.98 23.98 0 0012.938 10.29c1.192.221 1.641-.518 1.641-1.146 0-.628-.024-2.45-.032-4.442-6.676 1.443-8.087-2.817-8.087-2.817-1.089-2.766-2.663-3.493-2.663-3.493-2.178-1.478.163-1.45.163-1.45 2.413.17 3.68 2.461 3.68 2.461 2.138 3.648 5.616 2.593 6.983 1.976.215-1.545.838-2.596 1.526-3.193-5.333-.6-10.937-2.647-10.937-11.791a9.213 9.213 0 012.472-6.406c-.246-.6-1.069-3.026.234-6.322 0 0 2.015-.64 6.602 2.446a22.904 22.904 0 0112.017 0c4.583-3.086 6.594-2.446 6.594-2.446 1.307 3.288.484 5.714.238 6.322a9.194 9.194 0 012.476 6.414c0 9.163-5.615 11.183-10.957 11.772.859.742 1.626 2.193 1.626 4.421 0 3.193-.028 5.762-.028 6.548 0 .636.433 1.38 1.65 1.146a23.98 23.98 0 0012.938-10.291 23.754 23.754 0 003.151-16.175A23.834 23.834 0 0039.56 6.69 24.086 24.086 0 0024.009 1H24z\\"\\n clip-rule=\\"evenodd\\"\\n />\\n <path d=\\"M9.089 35.264c-.052.119-.243.154-.398.071-.155-.083-.27-.237-.214-.36.056-.122.242-.154.397-.07.155.082.274.24.215.359zM10.063 36.343a.4.4 0 01-.493-.11c-.155-.167-.187-.396-.068-.499.12-.102.334-.055.489.11.155.167.19.396.072.499zM11.008 37.714c-.147.103-.397 0-.536-.206a.395.395 0 010-.569c.147-.098.397 0 .537.202.139.202.143.47 0 .573zM12.292 39.042c-.131.146-.397.106-.616-.091-.219-.198-.27-.467-.139-.609.131-.142.397-.102.624.091.226.194.27.466.131.609zM14.092 39.816c-.06.186-.33.269-.6.19-.27-.08-.449-.3-.397-.49.051-.19.326-.277.6-.19.274.087.449.297.397.49zM16.056 39.95c0 .194-.223.36-.509.364-.286.004-.52-.154-.52-.348 0-.193.222-.36.508-.363.286-.004.52.15.52.347zM17.884 
39.646c.036.194-.163.395-.45.443-.285.047-.536-.067-.572-.257-.035-.19.171-.395.45-.447.278-.05.536.068.572.261z\\" />\\n </g>\\n <defs>\\n <clipPath id=\\"clip0_910_44\\">\\n <path fill=\\"#fff\\" d=\\"M0 0h48v48H0z\\" />\\n </clipPath>\\n </defs>\\n </svg>\\n </a>\\n <a href=\\"javascript:void()\\">\\n <svg\\n className=\\"w-6 h-6 hover:text-gray-500 duration-150\\"\\n fill=\\"currentColor\\"\\n viewBox=\\"0 0 48 48\\"\\n >\\n <g clip-path=\\"url(#clip0_17_63)\\">\\n <path d=\\"M24 4.322c6.413 0 7.172.028 9.694.14 2.343.104 3.61.497 4.453.825 1.116.432 1.922.957 2.756 1.791.844.844 1.36 1.64 1.79 2.756.329.844.723 2.12.826 4.454.112 2.53.14 3.29.14 9.693 0 6.413-.028 7.172-.14 9.694-.103 2.344-.497 3.61-.825 4.453-.431 1.116-.957 1.922-1.79 2.756-.845.844-1.642 1.36-2.757 1.791-.844.328-2.119.722-4.453.825-2.532.112-3.29.14-9.694.14-6.413 0-7.172-.028-9.694-.14-2.343-.103-3.61-.497-4.453-.825-1.115-.431-1.922-.956-2.756-1.79-.844-.844-1.36-1.641-1.79-2.757-.329-.844-.723-2.119-.826-4.453-.112-2.531-.14-3.29-.14-9.694 0-6.412.028-7.172.14-9.694.103-2.343.497-3.609.825-4.453.431-1.115.957-1.921 1.79-2.756.845-.844 1.642-1.36 2.757-1.79.844-.329 2.119-.722 4.453-.825 2.522-.113 3.281-.141 9.694-.141zM24 0c-6.516 0-7.331.028-9.89.14-2.55.113-4.304.526-5.822 1.116-1.585.619-2.926 1.435-4.257 2.775-1.34 1.332-2.156 2.672-2.775 4.247C.666 9.806.253 11.55.141 14.1.028 16.669 0 17.484 0 24s.028 7.331.14 9.89c.113 2.55.526 4.304 1.116 5.822.619 1.585 1.435 2.925 2.775 4.257a11.732 11.732 0 004.247 2.765c1.528.591 3.272 1.003 5.822 1.116 2.56.112 3.375.14 9.89.14 6.516 0 7.332-.028 9.891-.14 2.55-.113 4.303-.525 5.822-1.116a11.732 11.732 0 004.247-2.765 11.732 11.732 0 002.766-4.247c.59-1.528 1.003-3.272 1.115-5.822.113-2.56.14-3.375.14-9.89 0-6.516-.027-7.332-.14-9.891-.112-2.55-.525-4.303-1.115-5.822-.591-1.594-1.407-2.935-2.747-4.266a11.732 11.732 0 00-4.247-2.765C38.194.675 36.45.262 33.9.15 31.331.028 30.516 0 24 0z\\" />\\n <path d=\\"M24 11.672c-6.806 0-12.328 5.522-12.328 12.328 0 6.806 5.522 12.328 12.328 12.328 6.806 0 12.328-5.522 12.328-12.328 0-6.806-5.522-12.328-12.328-12.328zm0 20.325a7.998 7.998 0 010-15.994 7.998 7.998 0 010 15.994zM39.694 11.184a2.879 2.879 0 11-2.878-2.878 2.885 2.885 0 012.878 2.878z\\" />\\n </g>\\n <defs>\\n <clipPath id=\\"clip0_17_63\\">\\n <path d=\\"M0 0h48v48H0z\\" />\\n </clipPath>\\n </defs>\\n </svg>\\n </a>\\n </div>\\n </div>\\n </div>\\n </footer>\\n );\\n};\\n\\nexport default Footer;\\n
Let’s look at the pros and cons of using Float UI in your workflow.
\\nIn today’s fast-paced development landscape, meeting tight deadlines while delivering professional, responsive websites is no longer a daunting task, thanks to tools like Float UI.
\\nBy leveraging pre-built, customizable components powered by Tailwind CSS, developers can rapidly assemble polished interfaces without compromising quality or maintainability. As demonstrated in building SmartBrew’s business page, Float UI streamlines the process: from Hero sections to FAQs, each component integrates seamlessly into a React project, saving hours of manual coding.
\\nThe result? A fully functional, mobile-friendly website tailored to client specifications — completed in a fraction of the time it would take to build from scratch. While Float UI does require familiarity with Tailwind CSS, its open-source nature and flexibility make it an invaluable asset for developers tackling urgent projects or agencies aiming to scale efficiently.
\\nBy combining Float UI’s template library with strategic customization, developers can focus on unique branding and user experience rather than reinventing the wheel.
\\nThe final deployed app (viewable here) stands as a testament to how modern tools empower teams to deliver exceptional results under pressure.
\\nWhether you’re a freelancer, part of an agency, or an in-house developer, Float UI proves that speed and quality can coexist—no caffeine-fueled coding marathons required.
Picture this: You spend hours trying to perfect your website’s layout and ensure every element is pixel-perfect. It looks flawless in Chrome. But when you open it in Safari, the spacing is off. Then, in Firefox, a key animation doesn’t work at all. Frustrating, right?
\\nDifferent browsers don’t always interpret CSS the same way. Chrome, Edge, Safari, and Firefox each use their own rendering engines, which can lead to unexpected differences in layout, spacing, and animations. Some browsers support the latest CSS features, while others take longer to catch up. On top of that, vendor-specific properties often force developers to write extra code just to keep styles consistent. Even reliable techniques like Flexbox and CSS Grid don’t always behave the same across browsers.
That’s why cross-browser testing matters. Catching issues early saves hours of debugging and ensures users get a consistent experience, no matter which browser they use. Fortunately, there are powerful tools that can do the heavy lifting so you don’t have to test manually on every browser and device.
In this article, we’ll explore five tools, most of them free and open source, that simplify CSS compatibility testing. These tools help identify rendering inconsistencies, debug layout issues, and ensure your styles work smoothly across different browsers.
For each tool, we’ll cover what it does, the features that make it useful for cross-browser CSS testing, and how to get started.
\\nPlaywright is an open source testing framework that automates browser interactions across Chromium (Chrome and Edge), WebKit (Safari), and Firefox.
\\nPlaywright was specifically built for end-to-end testing, and it allows developers to test their applications across multiple browsers, operating systems, and devices. Playwright ensures that CSS renders consistently across different environments, whether running in a local environment or in a CI/CD pipeline.
Playwright provides several powerful features that make it an excellent tool for testing CSS across different browsers, including coverage of all three major browser engines, automatic waiting, built-in screenshot capture, parallel test execution, and first-class CI integration.
\\nInstalling Playwright is straightforward. You can install it using npm with the following command:
\\nnpm install @playwright/test\\n\\n
To set up Playwright in a new project, run:
\\nnpm init playwright@latest\\n\\n
This command initializes Playwright, creates necessary configuration files, and installs browsers needed for testing.
\\nOnce you install Playwright, you can start writing tests to check for CSS consistency across different browsers. Here’s an example test that captures screenshots in Chromium, WebKit, and Firefox to compare rendering differences:
const { test, expect } = require('@playwright/test');

test('Check CSS rendering across browsers', async ({ page, browserName }) => {
  await page.goto('https://example.com');

  // Capture a full-page screenshot named after the engine the test is
  // running in (chromium.png, firefox.png, or webkit.png per project)
  await page.screenshot({ path: `${browserName}.png`, fullPage: true });

  // Example: verify that a specific element is visible
  const button = page.locator('.my-button');
  await expect(button).toBeVisible();
});
The above script is written in JavaScript and is designed to automate cross-browser CSS testing.

It starts by importing the test and expect functions from Playwright, then defines a test named “Check CSS rendering across browsers.” Inside this test, it opens a browser page, navigates to https://example.com, and captures a full-page screenshot named after the browser engine the test is running in. Because Playwright runs each test once per configured browser project, you end up with chromium.png, firefox.png, and webkit.png to compare. Finally, it locates the button with the class .my-button and verifies that it is visible using Playwright’s expect function.
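One thing the test itself doesn’t show is how Playwright knows which browsers to run it in. That is configured through projects in your Playwright config; a minimal sketch might look like this:

// playwright.config.js
const { defineConfig, devices } = require('@playwright/test');

module.exports = defineConfig({
  // One project per browser engine; every test runs once per project
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});

Running npx playwright test then executes the suite against all three engines, producing one screenshot per browser.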
Selenium WebDriver is an open source automation framework that allows developers to programmatically control web browsers for testing purposes. It enables cross-browser testing by automating interactions with major browsers like Chrome, Firefox, Edge, and Safari. Selenium WebDriver is widely used for functional UI testing, browser compatibility testing, and automating repetitive web tasks.
\\nSelenium WebDriver makes it easy to test how your web application behaves across different browsers. It automates real user interactions like clicking buttons, filling out forms, and navigating between pages so you can catch layout issues, broken functionality, or unexpected behavior before users do.
\\n1. Install Selenium WebDriver
\\nInstall Selenium in your preferred programming language. For example, in Python:
\\npip install selenium\\n\\n
For Java, add the Selenium dependency in pom.xml (Maven):
\\n<dependency>\\n <groupId>org.seleniumhq.selenium</groupId>\\n <artifactId>selenium-java</artifactId>\\n <version>4.29.0</version>\\n</dependency>\\n\\n
For JavaScript (Node.js):
\\nnpm install selenium-webdriver\\n\\n
2. Download the browser driver
Each browser requires a specific WebDriver, such as ChromeDriver for Chrome or GeckoDriver for Firefox. Download and set up the appropriate driver, or let Selenium Manager (bundled with Selenium 4.6+) resolve drivers for you automatically.
\\nOnce installed, you can start writing Selenium tests to check UI consistency across different browsers. Here’s an example test using Python that validates a button’s visibility across browsers:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Launch browsers
browsers = ['chrome', 'firefox', 'edge']
for browser in browsers:
    if browser == 'chrome':
        driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
    elif browser == 'firefox':
        driver = webdriver.Firefox()
    elif browser == 'edge':
        driver = webdriver.Edge()
    driver.get("https://example.com")
    # Capture screenshots for comparison
    driver.save_screenshot(f"{browser}.png")
    # Verify if a button is visible
    button = driver.find_element(By.CLASS_NAME, "my-button")
    assert button.is_displayed(), f"{browser}: Button is not visible"
    driver.quit()
The script above launches Chrome, Firefox, and Edge in turn. For each browser, it navigates to https://example.com, saves a screenshot for visual comparison, verifies that a button with the class .my-button is visible, and then closes the browser.
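Since we installed the Node bindings earlier, here is a rough JavaScript equivalent of the same check. This is a sketch, not the article’s original code; it assumes Selenium 4.6+, which resolves browser drivers automatically via Selenium Manager:

const { Builder, By } = require('selenium-webdriver');
const fs = require('fs');

(async () => {
  for (const browser of ['chrome', 'firefox']) {
    const driver = await new Builder().forBrowser(browser).build();
    try {
      await driver.get('https://example.com');

      // takeScreenshot() returns a base64-encoded PNG
      const image = await driver.takeScreenshot();
      fs.writeFileSync(`${browser}.png`, image, 'base64');

      // Verify that the button is visible
      const button = await driver.findElement(By.className('my-button'));
      console.assert(await button.isDisplayed(), `${browser}: button not visible`);
    } finally {
      await driver.quit();
    }
  }
})();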
PostCSS and Stylelint work together to keep your CSS clean, error-free, and compatible across browsers.
\\n\\nPostCSS processes your styles using JavaScript plugins. One of its most important plugins, Autoprefixer, automatically adds missing vendor prefixes so your CSS works consistently in different browsers without you having to write extra code.
\\nStylelint is a modern CSS linter that catches errors, enforces best practices, and ensures your styles follow a consistent structure. It helps prevent common mistakes and keeps your code easy to maintain.
\\nBy using both tools, you can automate CSS fixes, catch invalid properties early, and make sure your styles work seamlessly across all browsers.
\\nWe have established that it can be tedious to handle CSS prefixes and ensure compatibility across different browsers.
\\nHere’s how PostCSS and Stylelint simplify this process:
\\nwebkit-
, -moz-
) based on browser support. Developers no longer need to remember which prefixes are required for different propertiesGetting started with PostCSS and Stylelint is straightforward. You can install them using npm with the following command:
\\nnpm install postcss autoprefixer stylelint\\n\\n
To configure Autoprefixer, create a postcss.config.js
file in your project and add:
module.exports = {\\n plugins: [\\n require(\'autoprefixer\')\\n ]\\n};\\n\\n
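If you want to see Autoprefixer at work outside of a build pipeline, you can also run PostCSS programmatically. The CSS snippet below is just an illustration:

// Processes a snippet of CSS through Autoprefixer and prints the result
const postcss = require('postcss');
const autoprefixer = require('autoprefixer');

const css = '.box { user-select: none; backdrop-filter: blur(4px); }';

postcss([autoprefixer])
  .process(css, { from: undefined })
  .then((result) => console.log(result.css));

Depending on your browserslist targets, the output will include prefixed variants such as -webkit-backdrop-filter.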
For Stylelint, add a .stylelintrc.json
configuration file:
{\\n \\"extends\\": \\"stylelint-config-standard\\",\\n \\"rules\\": {\\n \\"indentation\\": 2,\\n \\"max-nesting-depth\\": 3\\n }\\n}\\n\\n
Unlike traditional CSS preprocessors like Sass or Less, PostCSS operates at the post-processing level, modifying raw CSS based on plugin rules rather than requiring an entirely different syntax. When paired with Stylelint, it creates a powerful workflow that not only ensures cross-browser compatibility but also enforces styling consistency across a team.
\\n\\nBrowserStack is a testing platform that allows developers to test websites and mobile applications on real browsers and devices. It provides a cloud-based infrastructure that supports manual and automated testing across multiple environments. BrowserStack helps teams release high-quality applications faster with an emphasis on cross-browser testing, real device testing, and AI-powered insights.
BrowserStack offers a wide range of features that streamline testing workflows and improve test coverage. It enables teams to run manual and automated tests on thousands of real browser and device combinations, plug into CI/CD pipelines, and debug failures with logs, video recordings, and screenshots.
\\nGetting started with BrowserStack is simple. You can sign up for a free trial and integrate it into your test suite using tools like Selenium, Cypress, Playwright, and Appium. For automated testing, install the BrowserStack SDK with:
\\nnpm install -g browserstack-cli\\n\\n
To run Selenium tests, set up your credentials and execute tests using the following command:
\\nbrowserstack --username <your-username> --key <your-access-key> run tests\\n\\n
BackstopJS is another open source solution used to detect unintended CSS changes across different browsers:
\\nBackstopJS captures screenshots of a web application before and after a change, then compares them to detect visual differences. This automated approach eliminates the need for manual inspection (which can be time-consuming and prone to human error). Instead of sifting through multiple browser versions and devices to catch inconsistencies, BackstopJS provides a clear visual report highlighting areas that have changed.
You can use BackstopJS to catch unintended visual regressions after CSS changes, test responsive layouts across multiple viewports, and run visual checks automatically in CI pipelines.
\\nSetting up BackstopJS is straightforward. You can install it via npm with the following command:
\\nnpm install -g backstopjs\\n\\n
Then, initialize a new BackstopJS project:
\\nbackstop init\\n\\n
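backstop init generates a backstop.json file where you describe the pages and viewports to compare. A trimmed-down example (the URL and labels here are placeholders):

{
  "id": "my_site",
  "viewports": [
    { "label": "phone", "width": 375, "height": 667 },
    { "label": "desktop", "width": 1440, "height": 900 }
  ],
  "scenarios": [
    {
      "label": "Homepage",
      "url": "https://example.com",
      "misMatchThreshold": 0.1
    }
  ]
}

From there, backstop test captures fresh screenshots and diffs them against the approved reference set, and backstop approve promotes the latest run to the new baseline.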
When choosing a cross-browser testing tool, it’s essential to consider the type of testing you need to perform, the level of automation required, and the specific browsers, devices, or platforms you need to support. Below is a comparison of the testing tools we have highlighted so you can understand which one will be right for your use case.
Feature | Playwright | Selenium WebDriver | PostCSS + Stylelint | BrowserStack | BackstopJS
---|---|---|---|---|---
Use case | End-to-end automation testing | Functional testing across multiple browsers | CSS linting and styling checks | Cross-browser testing on real browsers and devices | Visual regression testing
Automation level | Fully automated | Fully automated | Static code analysis (CSS) | Manual or automated | Automated (visual)
Supported browsers | Chrome, Firefox, Safari, Edge | Chrome, Firefox, Safari, Edge, IE | CSS-focused (not browser-specific) | Chrome, Firefox, Safari, Edge on real devices | Chrome, Firefox
Best for | UI automation and performance testing | Functional testing in various browsers | CSS code quality enforcement | Real-device and real-browser coverage | Detecting UI layout changes
CI/CD integration | Yes (Jenkins, GitHub Actions, CircleCI) | Yes (Selenium Grid, Jenkins) | Yes (Prettier, ESLint) | Yes (Jenkins, GitHub Actions, etc.) | Yes (Jenkins, GitHub Actions)
Ease of setup | Moderate | Moderate to difficult | Easy | Easy | Moderate
Parallel execution | Yes | Yes (via Selenium Grid) | No | Yes (plan-dependent) | Yes
Headless mode | Yes | Yes | No | N/A (runs in the cloud) | Yes
Best for teams using | JavaScript, TypeScript, Python | Java, Python, C#, JavaScript | Frontend developers | Selenium, Cypress, Playwright | JavaScript, frontend teams
Learning curve | Moderate | Steep | Easy | Easy | Moderate
Go with Playwright or Selenium WebDriver if you’re testing web app functionality.

If you’re not testing interactions but want to enforce consistent styles, go for PostCSS + Stylelint.

Use BrowserStack if you need to verify how your site behaves on real browsers and devices in the cloud.

Go for BackstopJS if you want visual regression testing that can catch unwanted layout shifts and style changes.
\\nChoosing the right tool depends on what you’re testing. Are you checking for functional correctness, UI consistency, or styling errors?
In this article, we explored five of the best tools for cross-browser CSS testing. We broke down how Playwright and Selenium WebDriver help automate functional testing across multiple browsers, why PostCSS + Stylelint is essential for enforcing consistent styles, and how BrowserStack lets you validate your site on real browsers and devices in the cloud. We also looked at BackstopJS, which catches UI inconsistencies before they reach production.
These tools are powerful, mostly free, and built to streamline your workflow. Why not try them out and see which one fits your development process best?
\\n Fetching data in React applications has traditionally required useEffect
and state management, often leading to boilerplate code and UI flickering. With the introduction of React Suspense, handling asynchronous operations like data fetching has become more efficient and declarative.
React Suspense allows components to pause rendering until a specific condition, such as data availability, is met. In React v18+, it fully integrates with concurrent rendering, simplifying async tasks like data fetching, code splitting, and lazy loading without manual state management.
\\nIn this article, we’ll explore how to use Suspense for data fetching, how it works under the hood, and why it’s an important tool for modern React development.
\\nReact Suspense is a built-in feature in React for handling asynchronous operations. It enables components to temporarily suspend rendering while waiting for asynchronous data and display a fallback UI, such as a loading spinner, until the data becomes available.
\\nIt is important to note that React Suspense is neither a data-fetching library like react-async, nor a state management tool like Redux. It simply allows developers to declaratively render a fallback UI while a component waits for an asynchronous operation, such as a network request, to complete.
\\nAs we’ll see later, React Suspense helps synchronize loading states across multiple components, enhancing the user experience by ensuring a seamless transition while waiting for asynchronous data. It accomplishes this in a non-intrusive way, allowing developers to integrate it without requiring a complete rewrite of existing applications.
\\nIn this tutorial, we will use the DummyJSON API as a sample endpoint for our application.
\\nEditor’s note: This article was updated by Popoola Temitope in April 2025 to cover React 18 and 19 updates to Suspense, explore how Suspense integrates with Next.js server components, and include a discussion on lazy loading components using React lazy()
.
Let’s look at the code below that fetches the to-do
list from an endpoint and displays it on the user interface:
import { useEffect, useState } from \\"react\\";\\nfunction App() {\\n const [todos, setTodos] = useState([]);\\n useEffect(() => {\\n fetch(\\"https://dummyjson.com/todos?limit=300\\")\\n .then((res) => res.json())\\n .then((data) => {\\n setTodos(data.todos);\\n })\\n }, []);\\n return (\\n <div>\\n <h1>To-Do List</h1>\\n <ul>\\n {todos.map((todo) => (\\n <li key={todo.id}>\\n {todo.todo} {todo.completed ? \\"✅\\" : \\"❌\\"}\\n </li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\nexport default App;\\n\\n
The code above is one of the common ways to fetch asynchronous data. However, its limitation is that users have to wait while the to-do list loads without any indication that data is being fetched.
\\nWith React Suspense, we can easily display an alternative UI while the main asynchronous component or data is loading. The syntax below illustrates how to use React Suspense:
import React, { Suspense } from 'react';

// ...

<Suspense fallback={<Preloading />}>
  <TodoList />
</Suspense>
In the syntax above, we import Suspense from React and wrap the asynchronous component inside the <Suspense>
component. This tells React that while the asynchronous component is loading, it should render the UI specified in the fallback prop.
Another important thing to note is that the fallback property passed to React Suspense determines what is rendered while waiting for the network call to complete. This could be a spinner, a skeleton loader, or even nothing. React will display the specified fallback value until the network request is complete.
\\nIn React 18, Suspense became a stable part of concurrent rendering, enabling features like streaming server-side rendering, selective hydration, and integration with frameworks such as Next.js and Remix. However, it relied on the fetch-on-render
pattern, where components fetch data during rendering. This often led to network waterfalls, as nested components waited for their parents to load, degrading performance.
React 19 addresses these limitations by adding native support for data fetching with Suspense, reducing reliance on external libraries. It also improves error handling, enhances server component support, and streamlines asynchronous loading for more efficient rendering and a smoother user experience.
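For example, React 19’s use API lets a component read a promise directly and suspend until it resolves. Here is a minimal sketch, reusing the endpoint from this article’s other examples:

import { Suspense, use } from "react";

// Start the request once, outside of rendering, so the same promise
// is reused across re-renders instead of being recreated each time
const userPromise = fetch("https://dummyjson.com/users/1").then((res) => res.json());

function UserWelcome() {
  // use() suspends this component until the promise resolves
  const user = use(userPromise);
  return <h4>Welcome, {user.firstName}</h4>;
}

export default function App() {
  return (
    <Suspense fallback={<p>Fetching user details...</p>}>
      <UserWelcome />
    </Suspense>
  );
}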
\\nWhenever a React component needs data from an API, it typically fetches it by making a network request to the API endpoint. This is where different data fetching approaches come into play.
\\nLet’s explore three common methods used in React.
Using the fetch-on-render approach, the network request is triggered within the component itself after it has mounted. This approach is called fetch-on-render
because the request is not initiated until the component has rendered:
import { useState, useEffect } from \\"react\\";\\nconst UserComponent = () => {\\n const [userDetails, setUserDetails] = useState(null);\\n useEffect(() => {\\n fetch(\\"https://dummyjson.com/users/1\\")\\n .then((response) => response.json())\\n .then((data) => setUserDetails(data))\\n }, []);\\n if (!userDetails) return <p>Fetching user details...</p>;\\n return (\\n <div className=\\"app\\">\\n <div>\\n <h4>Welcome, {userDetails.firstName}</h4>\\n <p>{userDetails.email}</p>\\n </div>\\n </div>\\n );\\n};\\nexport default UserComponent;\\n\\n
A major drawback of this approach is the network waterfall problem. This occurs when multiple components independently make their own asynchronous fetch requests. If this component renders another component with a similar request, the nested structure causes sequential API calls, leading to performance issues due to delayed data fetching.
\\nThe fetch-then-render approach allows us to make an asynchronous request before the component is rendered or mounted. This approach helps ensure that asynchronous data is fetched completely before rendering the component.
\\nThe code below shows how to implement fetch-then-render in a React application:
import { useState, useEffect } from "react";

// Start the request immediately, at module load, before the component renders
const userPromise = fetch("https://dummyjson.com/users/1").then((response) => response.json());

const UserComponent = () => {
  const [userDetails, setUserDetails] = useState(null);

  useEffect(() => {
    // Consume the already-in-flight request once the component mounts
    userPromise.then((data) => setUserDetails(data));
  }, []);

  if (!userDetails) return <p>Fetching user details...</p>;

  return (
    <div className="app">
      <div>
        <h4>Welcome, {userDetails.firstName}</h4>
        <p>{userDetails.email}</p>
      </div>
    </div>
  );
};

export default UserComponent;
In this example, the network request starts as soon as the module loads, before UserComponent renders; the component then consumes the already-in-flight promise when it mounts. A major drawback of this approach is that multiple API requests can still increase the time to meaningful content, as the component must wait for all of its data to arrive before rendering it. This can lead to slower page loads and reduced responsiveness.
Traditional data-fetching patterns aren’t always performant and often struggle with handling asynchronous calls efficiently. React Suspense addresses this by enabling render-as-you-fetch, where rendering begins immediately after a network request is triggered.
\\nUnlike the fetch-then-render pattern, which waits for a response before rendering, render-as-you-fetch progressively updates the UI as data is retrieved. Let’s look at some code:
\\nimport { Suspense } from \\"react\\";\\nconst fetchData = () => {\\n let data;\\n let promise = fetch(\\"https://dummyjson.com/users/1\\")\\n .then((response) => response.json())\\n .then((json) => (data = json));\\n return {\\n read() {\\n if (!data) {\\n throw promise; \\n }\\n return data;\\n },\\n };\\n};\\nconst userData = fetchData(); \\nconst UserComponent = () => (\\n <Suspense fallback={<p>Fetching user details...</p>}>\\n <UserWelcome />\\n </Suspense>\\n);\\nconst UserWelcome = () => {\\n const userDetails = userData.read();\\n return (\\n <div className=\\"app\\">\\n <div>\\n <h4>Welcome, {userDetails.firstName}</h4>\\n <p>{userDetails.email}</p>\\n </div>\\n </div>\\n );\\n};\\nexport default UserComponent;\\n\\n
When UserComponent
mounts, it tries to render UserWelcome
, which calls userData.read()
. If the data isn’t available, read()
throws a promise that React Suspense catches, prompting React to render the fallback UI. Once the data resolves, React re-renders UserWelcome
with the fetched details.
While client-side data fetching has existed in React for years, the introduction of React Suspense is a valuable addition to data-fetching techniques. From a user’s perspective, React Suspense significantly enhances the experience by providing subtle loaders that not only offer immediate UI feedback but also improve the Cumulative Layout Shift (CLS) score substantially.
\\nFrom a developer’s perspective, the Suspense pattern promotes a more reactive approach rather than a purely declarative one. It eliminates the need to manually handle errors and loading states for each asynchronous call within the application.
\\nThe React Suspense API is gaining popularity because it enables more reactive and maintainable code, leading to better UX and improved performance.
\\nReact Query and React Suspense can work together to improve data fetching and UI responsiveness. While React Query provides powerful features like caching, automatic retries, and background refetching, React Suspense helps manage loading states in a more declarative way.
\\n\\nLet’s install React Query as a dependency in our React application using the command below:
\\nnpm i @tanstack/react-query\\n\\n
In our main component, we’ll import useQuery
, QueryClient
, and QueryClientProvider
from @tanstack/react-query
and use them to fetch data from an API with React Suspense enabled, as shown in the code below:
import { useQuery, QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { Suspense } from 'react';

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      suspense: true, // Enables Suspense for queries (TanStack Query v4)
    },
  },
});

const fetchUser = async () => {
  const res = await fetch('https://dummyjson.com/users/1');
  if (!res.ok) throw new Error('Network response was not ok');
  return res.json();
};

function UserComponent() {
  // queryKey is required; it identifies and caches this query
  const { data } = useQuery({ queryKey: ['user'], queryFn: fetchUser });
  return (
    <div>
      <h2>{data.firstName} {data.lastName}</h2>
      <p>Email: {data.email}</p>
      <p>Age: {data.age}</p>
    </div>
  );
}

export default function App() {
  return (
    <QueryClientProvider client={queryClient}>
      <Suspense fallback={<div>Loading user data...</div>}>
        <UserComponent />
      </Suspense>
    </QueryClientProvider>
  );
}
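One caveat: the suspense: true query option shown above belongs to TanStack Query v4. In v5, it was replaced by a dedicated hook, useSuspenseQuery, which always suspends. Roughly, reusing the fetchUser function from above:

import { useSuspenseQuery } from "@tanstack/react-query";

function UserComponent() {
  // Suspends until fetchUser resolves; no suspense flag needed
  const { data } = useSuspenseQuery({ queryKey: ["user"], queryFn: fetchUser });
  return <h2>{data.firstName} {data.lastName}</h2>;
}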
Lazy loading components using React lazy()
React provides React lazy()
as a built-in feature for dynamically loading components only when needed, enhancing performance by reducing the initial bundle size. When combined with React Suspense, React lazy()
ensures that components load smoothly, with a fallback UI displayed until they are ready to render.
To demonstrate how to use React Suspense with React lazy()
, let’s create a UserWelcome component that fetches user information from an API endpoint. Start by creating a new file named UserWelcome.js
and add the following code to it:
const fetchData = () => {\\n let data;\\n let promise = fetch(\\"https://dummyjson.com/users/1\\")\\n .then((response) => response.json())\\n .then((json) => (data = json));\\n return {\\n read() {\\n if (!data) {\\n throw promise;\\n }\\n return data;\\n },\\n };\\n };\\n const userData = fetchData();\\n const UserWelcome = () => {\\n const userDetails = userData.read();\\n return (\\n <div className=\\"app\\">\\n <div>\\n <h4>Welcome, {userDetails.firstName}</h4>\\n <p>{userDetails.email}</p>\\n </div>\\n </div>\\n );\\n };\\n export default UserWelcome;\\n\\n
Instead of loading the UserWelcome
component upfront, we can lazy load it using React lazy()
, ensuring it is fetched only when needed. To manage the component’s loading state when using React lazy()
, we can wrap it with React Suspense, as used in the code below:
import { Suspense, lazy } from \\"react\\";\\nconst UserWelcome = lazy(() => import(\\"./UserWelcome\\"));\\nconst UserComponent = () => (\\n <Suspense fallback={<p>Fetching user details...</p>}>\\n <UserWelcome />\\n </Suspense>\\n);\\nexport default UserComponent;\\n\\n
Using React lazy()
with React Suspense helps optimize the initial page load time and enhances the user experience.
Using React Suspense and the render-as-you-fetch approach, we will build a simple app that fetches user information and a to-do list from an API and renders them in our React application.
\\nTo get started, let’s create a UserDetails
component that fetches user data from the https://dummyjson.com/users/1
endpoint and renders the user details in the component UI. To do this, inside the src folder, create a file named UserDetails.js
and add the following code to it:
import React, { useEffect, useState } from \\"react\\";\\nexport default function UserDetails() {\\n const [user, setUser] = useState(null);\\n useEffect(() => {\\n fetch(\\"https://dummyjson.com/users/1\\")\\n .then((res) => res.json())\\n .then((data) => setUser(data));\\n }, []);\\n if (!user) return null;\\n return (\\n <>\\n <div className=\\"mb-3\\">\\n <p><strong>User:</strong> {user.firstName} {user.lastName}</p>\\n <p><strong>Email:</strong> {user.email}</p>\\n </div>\\n <h5 className=\\"mb-5\\">Here is your todo list for today.</h5>\\n </>\\n );\\n}\\n\\n
Next, let’s create a Todos component that fetches the to-do list from the https://dummyjson.com/todos
endpoint and displays the records in the component’s UI, just as we did for the UserDetails
component.
To do this, create a new file named Todos.js
and add the following code to it:
import React, { useEffect, useState } from \\"react\\";\\n\\nexport default function Todos() {\\n const [todos, setTodos] = useState([]);\\n useEffect(() => {\\n fetch(\\"https://dummyjson.com/todos?limit=10\\")\\n .then((res) => res.json())\\n .then((data) => setTodos(data.todos));\\n }, []);\\n if (todos.length === 0) return null;\\n return (\\n <div>\\n <h4 className=\\"mb-2\\">Todos:</h4>\\n <ul className=\\"list-group\\">\\n {todos.map((todo) => (\\n <li key={todo.id} className=\\"list-group-item d-flex justify-content-between align-items-center\\">\\n {todo.todo}\\n <span>{todo.completed ? \\"✅\\" : \\"❌\\"}</span>\\n </li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\n\\n
Now that all our React components are in place, let’s explore how to manage the rendering order using React Suspense with lazy()
. To optimize component loading time with React lazy()
and handle loading states using React Suspense for a better user experience, open the App.js
file and add the following code:
import React, { Suspense, lazy } from \\"react\\";\\nconst UserDetails = lazy(() => import(\\"./UserDetails\\"));\\nconst Todos = lazy(() => import(\\"./Todos\\"));\\nexport default function App() {\\n return (\\n <div className=\\"d-flex justify-content-center align-items-center vh-100\\" style={{ backgroundColor: \\"#dbeeff\\",display:\\"flex\\" }}>\\n <div className=\\"card shadow-lg p-4 rounded-4 text-center\\" style={{ maxWidth: \\"500px\\", width: \\"100%\\", background: \\"#fff\\",margin:\\"100px\\" }}>\\n <h2 className=\\"mb-3\\">Simple Todo</h2>\\n <Suspense fallback={<p>Loading user details...</p>}>\\n <UserDetails />\\n </Suspense>\\n <Suspense fallback={<p>Loading Todos...</p>}>\\n <Todos />\\n </Suspense>\\n </div>\\n </div>\\n );\\n}\\n\\n
Imagine if the Todos
component retrieves its data first. You start going through the list, only for the UserDetails
component to load a little later. The newly rendered content would push the existing to-do list down in an awkward way, potentially disorienting your users:
If you want the Todos component to render only after the UserDetails
component has finished rendering, you can nest the React Suspense component around the Todos component like this:
<Suspense fallback={<p>Loading user details...</p>}>\\n <UserDetails />\\n <Suspense fallback={<p>Loading Todos...</p>}>\\n <Todos />\\n </Suspense>\\n</Suspense>\\n\\n
This will cause React to render the components in the order they appear in your code, regardless of which one gets its data first:
\\nYou can see how easy it is to organize your application’s loading states, compared to manually managing isLoading
variables. A top-down loading approach like this is much easier to reason about and maintain.
Error boundaries are React components that catch JavaScript errors in their child component tree, log the errors, and display a fallback UI instead of crashing the whole application. They help improve user experience by gracefully handling unexpected errors.
\\nWe can enhance error handling in our application by integrating error boundaries with React Suspense. While Suspense is primarily used for handling asynchronous operations, it does not inherently catch errors. Instead, we use error boundaries to handle errors that occur within the Suspense tree, ensuring that failures do not break the rest of the application.
\\nIn a typical React Suspense pattern, we often work with async operations that return Promises. If a Promise rejects, the Suspense boundary alone does not handle the error. This is where an error boundary is needed—to gracefully manage failed Promise states and display a fallback UI when necessary.
\\nReact only supports error boundaries in class components, so we need to create one by creating a new file named ErrorBoundary.js
inside the src
folder and then add the code below:
import React from "react";

class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static defaultProps = {
    fallback: <h1>Something went wrong.</h1>,
  };

  static getDerivedStateFromError(error) {
    // Update state so the next render shows the fallback UI
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    console.log(error, errorInfo);
  }

  render() {
    if (this.state.hasError) {
      return this.props.fallback;
    }

    return this.props.children;
  }
}

export default ErrorBoundary;
Next, let’s import the ErrorBoundary
component and wrap our React Suspense inside it to handle Promise failures that may occur when loading asynchronous components. To do that, update your main component with the following code:
import React, { Suspense, lazy } from \\"react\\";\\nimport ErrorBoundary from \\"./ErrorBoundary\\";\\nconst UserDetails = lazy(() => import(\\"./UserDetails\\"));\\n\\nexport default function App() {\\n return (\\n <ErrorBoundary fallback={<p>An error occurred while fetching user details...</p>}>\\n <Suspense fallback={<p>Loading user details...</p>}>\\n <UserDetails />\\n </Suspense>\\n </ErrorBoundary>\\n );\\n}\\n\\n
By wrapping React Suspense inside the ErrorBoundary
component, our application can effectively catch errors that occur during asynchronous operations, preventing crashes and ensuring a smoother user experience.
In this article, we explored the React Suspense component and various data-fetching approaches in React. We also built a simple app that uses React Suspense for data fetching.
\\nThe newly updated React documentation is a great resource for learning about data fetching from a server-side perspective. However, for client-heavy interactions, you can always apply the fetching patterns we discussed above.
There are several toast libraries in the React ecosystem. In this article, we will explore how to use React-Toastify in a React project.
\\nToast notifications, or toast messages, are unobtrusive, in-app pop-up messages that provide users with feedback about an operation. Toasts usually disappear after a given time; therefore, removing them doesn’t require any user action. However, you can also close a toast notification before the expiration of the timeout.
\\nThe styling and positioning of a toast largely depend on the purpose and nature of the user feedback. For example, a notification that indicates success is styled differently from a warning or an error notification.
\\nThe feedback that toast notifications provide can be messages of success, warning, or error, as shown in the image below:
\\nReact-Toastify is a free, popular, and MIT-licensed package that you can use to add toast notifications to your React application. There are several other similar toast libraries in the React ecosystem.
\\nEditor’s note: This article was last updated by Chizaram Ken in April 2025 to introduce changes related to React-Toastify v11 and provide additional real-world use cases for React-Toastify.
\\nUse any of the commands below to install React-Toastify in a React project:
\\n# npm\\nnpm install react-toastify\\n\\n# yarn\\nyarn add react-toastify\\n\\n
After installation, import the ToastContainer component and the toast object as shown in the example below. In versions prior to v11, you must also import React-Toastify’s CSS file (from v11 onward, the styles are injected automatically, as we’ll see later):
import { ToastContainer, toast } from \'react-toastify\';\\nimport \'react-toastify/dist/ReactToastify.css\';\\n\\n
In this section, you will learn how to use React-Toastify to style toast messages. If you haven’t already, start by creating a React app.
\\nAs toast messages are notifications you can use to provide feedback to the user, they can be displayed on user login success, login error, or when a network request succeeds, fails, or times out.
\\nIn your App.js
file, import react-toastify
and its CSS file and invoke the toast.success
function with the notification message like so:
import React from \\"react\\";\\nimport { ToastContainer, toast } from \\"react-toastify\\";\\nimport \\"react-toastify/dist/ReactToastify.css\\";\\n\\nfunction App() {\\n const showToastMessage = () => {\\n toast.success(\\"Success Notification !\\", {\\n position: \\"top-right\\"\\n });\\n };\\n\\n return (\\n <div>\\n <button onClick={showToastMessage}>Notify</button>\\n <ToastContainer />\\n </div>\\n );\\n}\\n\\nexport default App;\\n\\n
Notice we also rendered the ToastContainer
in the code above. This container wraps our toast pop-ups. Without it, the toast pop-ups won’t be displayed.
When you click the Notify
button, the code above will display a toast similar to what you see below:
By default, all toasts are positioned at the top right of the page. This position can be changed by assigning a new position to the toast. React-Toastify allows for six positions:
\\ntop-right
top-center
top-left
bottom-right
bottom-center
bottom-left
Depending on where you want the toast message, you can set its position like so:
\\ntoast.success(\\"Success Notification !\\", {\\n position: \\"top-right\\",\\n});\\n\\ntoast.success(\\"Success Notification !\\", {\\n position: \\"top-center\\",\\n});\\n\\ntoast.success(\\"Success Notification !\\", {\\n position: \\"top-left\\",\\n});\\n\\ntoast.success(\\"Success Notification !\\", {\\n position: \\"bottom-right\\",\\n});\\n\\ntoast.success(\\"Success Notification !\\", {\\n position: \\"bottom-left\\",\\n});\\n\\ntoast.success(\\"Success Notification !\\", {\\n position: \\"bottom-center\\",\\n});\\n\\n
The image below shows a toast message in all six possible locations on the webpage:
Similar to setting your toast message’s position, you can use React-Toastify to specify different types of toast messages, helping users better understand the information displayed and improving the user experience.
\\nThis technique uses different styling for each message type to make it easier to quickly understand the information and its intent. For example, a red-colored toast UI typically implies a warning or error message, and a green-colored message typically implies a successful response.
\\nYou can use specific toast functions for the different toast message variants. Add the following changes in the click event handler in your App.js
file:
toast.success(\\"Success Notification !\\", {\\n position: \\"top-right\\",\\n});\\n\\ntoast.error(\\"Error Notification !\\", {\\n position: \\"top-center\\",\\n});\\n\\ntoast.warning(\\"Warning Notification !\\", {\\n position: \\"top-left\\",\\n});\\n\\ntoast.info(\\"Information Notification !\\", {\\n position: \\"bottom-center\\",\\n});\\n\\ntoast(\\"Default Notification !\\", {\\n position: \\"bottom-left\\",\\n});\\n\\ntoast(\\"Custom Style Notification with css class!\\", {\\n position: \\"bottom-right\\",\\n className: \\"foo-bar\\",\\n});\\n\\n
The code above should display the toast messages below:
\\nThe last toast notification at the bottom right of the image above is a custom toast. Unlike the others, we added a className
to it. Let’s learn more about custom toast messages in React-Toastify.
A custom toast allows you to implement the toast UI styling that matches your brand color, website, or application theme.
\\n\\nTo style your toast message, first assign it a className
as in the example below:
toast(\\"This is a custom toast Notification!\\", {\\n position: \\"top-left\\",\\n className: \\"toast-message\\",\\n});\\n\\n
Next, use the className
selector to apply the styles in your CSS file:
.toast-message {\\n background: darkblue;\\n color: #fff;\\n font-size: 20px;\\n width: 34vw;\\n padding: 30px 20px;\\n}\\n\\n
With the styles specified in the example above, you should see the following result:
Promise-based toasts in React-Toastify

In addition to the toast variants highlighted above, you can also use React-Toastify to create and display Promise-based notifications. You can perform asynchronous operations such as network requests, and use these toast notifications to display a success or error message when the operation is complete.
To create Promise
-based toasts, add the following to your App.js
file:
useEffect(() => {
  const myPromise = new Promise((resolve, reject) =>
    fetch("https://jsonplaceholder.typicode.com/posts/1")
      .then((response) => response.json())
      .then((json) => setTimeout(() => resolve(json), 3000))
      .catch(reject) // Without this, a failed fetch would never trigger the error toast
  );

  toast.promise(myPromise, {
    pending: "Promise is pending",
    success: "Promise Loaded",
    error: "error",
  });
}, []);
In the toast.promise
function call, we set the pending
, success
, and error
messages. The pending
message will display as the fetch executes. Depending on the outcome, either a success
or error
message will display afterward:
You can also add a custom icon to your toast notification depending on the type. To add a custom icon, let’s look at the code below:
\\nconst CustomIcon = ({ isLoading, type }) => {\\n if (isLoading) return <Spinner />;\\n\\n switch (type) {\\n case \\"success\\":\\n return <span>✅</span>;\\n case \\"error\\":\\n return <span>❌</span>;\\n case \\"warning\\":\\n return <span>⚠️</span>;\\n case \\"info\\":\\n return <span>ℹ️</span>;\\n default:\\n return <span></span>;\\n }\\n};\\n\\n
React-Toastify passes several props to your custom icon component, including isLoading and type, which the function above destructures. With these props, we can assign different icons to our toast types, and you can change the icon color if you see fit. isLoading is true while a toast.promise is still pending.
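Note that CustomIcon references a Spinner component that isn’t defined above; a minimal hypothetical placeholder could be as simple as:

// Hypothetical placeholder for the Spinner referenced in CustomIcon
const Spinner = () => <span aria-label="loading">⏳</span>;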
Finally, pass the CustomIcon
function to the icon
prop in ToastContainer
:
<ToastContainer icon={CustomIcon} />\\n\\n
As you can see from the previous sections and examples, each new toast notification is displayed below the ones already on screen. We can make them overlap, or stack, for a better user experience. This way, the notifications won’t take up as much space in our viewport.
\\nTo do this, add the stacked
prop to the ToastContainer
:
<ToastContainer
  stacked
  hideProgressBar
  icon={CustomIcon}
  position="bottom-right"
  style={{ width: "20vw" }}
/>
I added the hideProgressBar
prop so that the toasts look better and less chaotic when they are stacked:
useNotificationCenter Hook

The useNotificationCenter Hook is a React-Toastify addon introduced in React-Toastify v9. You can use it to build a notification center on top of React-Toastify.
Whenever you invoke any toast variant function — like toast.update
, toast.promise
, toast.info
, etc. — while using the useNotificationCenter
Hook, the toast notification will get added to the toast center.
Before using this Hook, first import it from react-toastify
addons. You can use it in your component like any other React Hook:
import { useNotificationCenter } from \\"react-toastify/addons/use-notification-center\\";\\n\\nconst App = () => {\\n const { notifications } = useNotificationCenter();\\n return null;\\n};\\n\\n
The useNotificationCenter
Hook returns several methods and properties, including notifications
, clear
, markAllAsRead
, markAsRead
:
notifications
— Gives us access to all the notification items or toast messages that we have in our center. Each notificationItem
in the notifications
array contains data such as the id
, read
status (Boolean), theme
, isLoading
status (Boolean), etc.clear
— Removes all the notifications from the notification centermarkAllAsRead
— Marks all the notifications as read. It changes the value of the read
Boolean property of every notificationItem
from false
to true
. In comparison, markAsRead
only changes the read
Boolean property of one notificationItem
to true
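As a quick illustration, here is a minimal sketch (the component name is ours) that derives an unread count from notifications and wires clear and markAllAsRead to buttons:

import { useNotificationCenter } from "react-toastify/addons/use-notification-center";

function NotificationSummary() {
  const { notifications, clear, markAllAsRead } = useNotificationCenter();

  // Count the items whose read flag is still false
  const unreadCount = notifications.filter((item) => !item.read).length;

  return (
    <div>
      <p>{unreadCount} unread notifications</p>
      <button onClick={() => markAllAsRead()}>Mark all as read</button>
      <button onClick={clear}>Clear all</button>
    </div>
  );
}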
React-Toastify v11 shipped many exciting updates, with a big focus on customization. The idea is that React-Toastify can serve as a holistic tool for almost all of your notification needs, and these aren’t gimmicks; they’re the kind of features that make users stick around. Here are a few important updates that stand out:
\\nv11 finally adds proper accessibility support, which is a big win for inclusivity. You can now slap an ariaLabel
prop on both ToastContainer
and individual toasts to make them screen-reader-friendly.
Plus, there’s built-in keyboard navigation. Press Alt+T
, and the first visible toast gets focus, letting users tab through its elements (like buttons in a custom toast).
This does make a lot of sense; accessibility shouldn’t just be a box to check. The ariaLabel
also helps with testing (e.g., finding toasts in Cypress), and the keyboard nav is a nice touch for power users. Here’s a quick one from the docs:
toast(\'Hello!\', {\\n ariaLabel: \'Greeting notification\',\\n});\\n\\n<ToastContainer\\n hotKeys={(e) => e.ctrlKey && e.key === \'n\'} // Custom hotkey: Ctrl+N\\n ariaLabel=\\"Notifications Ctrl+N\\"\\n/>;\\n\\n
In my opinion, this now makes React-Toastify feel more mature and ready for serious apps where accessibility matters.
onClose callback

The onClose callback now tells you why a toast was closed. Did the user click it away (reason: true), or did it auto-close? You can even pass custom reasons from a custom component, which is super handy for complex toasts with multiple actions.
This gives you good control to react differently based on user behavior. For example, if a toast has Reply and Ignore buttons, you can trigger different logic depending on what the user picks. Check this out:
\\n\\"use client\\";\\nimport { toast, ToastContainer } from \'react-toastify\';\\nfunction CustomNotification({ closeToast }) {\\n return (\\n <div className=\\"flex items-center gap-4 p-4 bg-gray-800 rounded-lg\\">\\n <span className=\\"text-white text-base font-medium\\">New message! ✉️</span>\\n <div className=\\"flex gap-2\\">\\n <button\\n onClick={() => closeToast(\'reply\')}\\n className=\\"bg-blue-600 text-white py-1.5 px-3 rounded-md text-sm font-medium hover:bg-blue-700 transition-all hover:-translate-y-0.5 active:translate-y-0 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-offset-2 focus:ring-offset-gray-800\\"\\n >\\n Reply\\n </button>\\n <button\\n onClick={() => closeToast(\'ignore\')}\\n className=\\"bg-gray-500 text-white py-1.5 px-3 rounded-md text-sm font-medium hover:bg-gray-600 transition-all hover:-translate-y-0.5 active:translate-y-0 focus:outline-none focus:ring-2 focus:ring-gray-400 focus:ring-offset-2 focus:ring-offset-gray-800\\"\\n >\\n Ignore\\n </button>\\n </div>\\n </div>\\n );\\n}\\nfunction NotificationTrigger() {\\n const triggerToast = () => {\\n toast(CustomNotification, {\\n onClose: (reason) => {\\n if (reason === \'reply\') {\\n console.log(\'User wants to reply!\');\\n } else if (reason === \'ignore\') {\\n console.log(\'User ignored the message.\');\\n }\\n },\\n });\\n };\\n return (\\n <div className=\\"min-h-screen flex items-center justify-center bg-gradient-to-br from-gray-100 to-gray-200 p-4\\">\\n <div className=\\"bg-white rounded-2xl shadow-xl p-8 w-full max-w-md text-center\\">\\n <h1 className=\\"text-2xl font-bold text-gray-800 mb-4\\">Test Notification</h1>\\n <p className=\\"text-gray-600 mb-6\\">Click below to see the custom toast!</p>\\n <button\\n onClick={triggerToast}\\n className=\\"bg-indigo-600 text-white py-3 px-6 rounded-lg text-lg font-medium hover:bg-indigo-700 transition-all hover:-translate-y-0.5 active:translate-y-0 focus:outline-none focus:ring-2 focus:ring-indigo-500 focus:ring-offset-2\\"\\n >\\n Show Notification\\n </button>\\n </div>\\n <ToastContainer\\n ariaLabel=\\"Custom notification\\"\\n position=\\"top-right\\"\\n autoClose={5000}\\n newestOnTop\\n closeOnClick={false}\\n pauseOnHover\\n className=\\"mt-4\\"\\n />\\n <style jsx global>{`\\n .Toastify__toast {\\n background: transparent;\\n box-shadow: none;\\n padding: 0;\\n min-height: auto;\\n border-radius: 0;\\n }\\n .Toastify__toast-body {\\n margin: 0;\\n padding: 0;\\n width: 100%;\\n }\\n `}</style>\\n </div>\\n );\\n}\\nexport default NotificationTrigger;\\n\\n
This is what it looks like:
This is a great update. It’s like giving your notifications their own little decision-making powers.
\\nv11 lets you roll up your own progress bar to your taste, without losing features like autoClose
, pauseOnHover
, or pauseOnFocusLoss
. You simply pass customProgressBar: true
and render your component, which gets an isPaused
prop to sync animations.
This is cool because progress bars are a great way to show time-sensitive actions (like a toast disappearing). Now you can make them match your app’s style perfectly. The docs show how easy it is:
\\nfunction CustomComponent({ isPaused, closeToast }) {\\n return (\\n <div>\\n <span>Processing...</span>\\n <div\\n style={{\\n width: isPaused ? \'100%\' : \'0%\',\\n transition: \'width 8s linear\',\\n background: \'limegreen\',\\n }}\\n onTransitionEnd={() => closeToast()}\\n />\\n </div>\\n );\\n}\\n\\ntoast(CustomComponent, {\\n autoClose: 8000,\\n customProgressBar: true,\\n});\\n\\n
This wasn’t even planned for v11, but it’s such a fun addition!
\\nIn v11, there are some breaking changes to watch for. The useToastContainer
and useToast
Hooks are gone (they were too fiddly anyway), and onClose
/onOpen
callbacks no longer get children
props. The minimal CSS file and SCSS support are out, and some class names (like Toastify__toast-body
) have been axed to simplify the DOM.
These changes make the library leaner and easier to customize, even if they mean a bit of migration work. The simplified DOM structure is a big reason why Tailwind works so well now. The docs warn you upfront, which is nice:
\\n// Old way (v10)\\nimport \'react-toastify/dist/ReactToastify.css\'; // No longer needed!\\n\\n// New way (v11)\\nimport { ToastContainer } from \'react-toastify\';\\n// CSS is auto-injected, just use <ToastContainer />\\n\\n
If you’re upgrading, check your custom styles and Hooks. But honestly, the trade-off is worth it for how much cleaner everything feels.
\\nNow that we understand the useNotificationCenter
Hook along with toast message positions, types, and customization, let’s see how we can use them together in an application.
First, in your App.js file, destructure the methods returned by the useNotificationCenter Hook that we covered in the previous sections:
import React from 'react';
import { useNotificationCenter } from 'react-toastify/addons/use-notification-center';
import { toast, ToastContainer } from 'react-toastify';
import 'react-toastify/dist/ReactToastify.css';

const App = () => {
  const { notifications, clear, markAllAsRead, markAsRead } = useNotificationCenter();
  // ...we will build out the UI in the next step
};
In the example above, we also imported toast
and ToastContainer
with its CSS file. Let’s declare an event handler that will create a toast when a button is clicked:
import React from \\"react\\";\\nimport { useNotificationCenter } from \\"react-toastify/addons/use-notification-center\\";\\nimport { toast, ToastContainer } from \\"react-toastify\\";\\nimport \\"react-toastify/dist/ReactToastify.css\\";\\n\\nconst App = () => {\\n const { notifications, clear, markAllAsRead, markAsRead } =\\n useNotificationCenter();\\n\\n const showToast = () => {\\n toast(\\"Hello World\\", {\\n data: {\\n title: \\"Hello World Again\\",\\n text: \\"We are here again with another article\\",\\n },\\n });\\n };\\n\\n return (\\n <div>\\n <p>{notifications.length}</p>\\n <button onClick={showToast}>Click me</button>\\n <ToastContainer />\\n </div>\\n );\\n};\\n\\nexport default App;\\n\\n
In the code above, we added a paragraph tag to display the number of toast messages added to the notification center.
\\nClicking the button will create a new toast, and the paragraph text will display the number of toast messages we’ve created:
\\nCool, right? Let’s see what else we can do. As mentioned earlier, you can perform sorting, mapping, and other actions on the notifications
array returned by the useNotificationCenter
Hook.
Copy and paste the code below into the App.js
file:
import React from \\"react\\";\\nimport { useNotificationCenter } from \\"react-toastify/addons/use-notification-center\\";\\nimport { toast, ToastContainer } from \\"react-toastify\\";\\nimport \\"react-toastify/dist/ReactToastify.css\\";\\n\\nconst App = () => {\\n const { notifications, clear, markAllAsRead, markAsRead } =\\n useNotificationCenter();\\n\\n const showToast = () => {\\n toast(\\"Hello World\\", {\\n data: {\\n title: \\"Hello World Again\\",\\n text: \\"We are here again with another article\\",\\n },\\n });\\n };\\n\\n const showSuccessToast = () => {\\n toast.success(\\"Hello World\\", {\\n data: {\\n title: \\"Success toast\\",\\n text: \\"This is a success message\\",\\n },\\n });\\n };\\n\\n const showErrorToast = () => {\\n toast.error(\\"Hello World\\", {\\n data: {\\n title: \\"Error toast\\",\\n text: \\"This is an error message\\",\\n },\\n });\\n };\\n\\n return (\\n <div>\\n <p>{notifications.length}</p>\\n <button onClick={showToast}>Default</button>\\n <button onClick={showSuccessToast}>Success</button>\\n <button onClick={showErrorToast}>Error</button>\\n <br />\\n <br />\\n <button onClick={clear}>Clear Notifications</button>\\n <button onClick={() => markAllAsRead()}>Mark all as read</button>\\n <ul>\\n {notifications.map((notification) => (\\n <li\\n onClick={() => markAsRead(notification.id)}\\n key={notification.id}\\n style={\\n notification.read\\n ? { background: \\"green\\", color: \\"silver\\", padding: \\"0 20px\\" }\\n : {\\n border: \\"1px solid black\\",\\n background: \\"navy\\",\\n color: \\"#fff\\",\\n marginBottom: 20,\\n cursor: \\"pointer\\",\\n padding: \\"0 20px\\",\\n }\\n }\\n >\\n <span>id: {notification.id}</span>\\n <p>title: {notification.data.title}</p>\\n <p>text: {notification.data.text}</p>\\n </li>\\n ))}\\n </ul>\\n <ToastContainer />\\n </div>\\n );\\n};\\n\\nexport default App;\\n\\n
Let’s break down the code above.
\\nFirst, we are mapping through the notifications
array, which is an array of notification items, and getting the id
, title
, and text
of our toast messages.
Then, we register onClick
event handlers on the notification items. When a notification item gets clicked, we use the markAsRead
function to mark the item as read. We also change the background color of a notification item to differentiate between read and unread notifications.
The Mark all as read button uses the markAllAsRead
function to change the read
status of all notification items to true
. When this button is clicked, all item backgrounds will change color.
Lastly, the “Clear Notifications” button uses the clear
function to delete or remove all items from the notification center.
Remember, when you invoke any toast variant method — whether toast.success
, toast.error
, toast.update
, or any other type — the toast will be added to the notification center, like so:
Let’s visualize what we’re about to build: an app where users submit a form to create a post. We will use the JSON Placeholder API to post our data, and React-Toastify to report whether the request succeeded or ran into errors.
\\nHere is the code for that:
\\n\\"use client\\"\\nimport { toast, ToastContainer } from \'react-toastify\';\\nimport { useState, useEffect } from \'react\';\\nfunction Posts() {\\n const [title, setTitle] = useState(\'\');\\n const [isSubmitting, setIsSubmitting] = useState(false);\\n const [posts, setPosts] = useState([]);\\n const handleSubmit = async (e) => {\\n e.preventDefault();\\n if (!title.trim()) {\\n toast.warning(\'Please enter a post title!\', { ariaLabel: \'Empty title warning\' });\\n return;\\n }\\n setIsSubmitting(true);\\n const toastId = toast.loading(\'Creating your post...\');\\n try {\\n const response = await fetch(\'https://jsonplaceholder.typicode.com/posts\', {\\n method: \'POST\',\\n headers: { \'Content-Type\': \'application/json\' },\\n body: JSON.stringify({ title, body: \'Post content goes here\', userId: 1 }),\\n });\\n const data = await response.json();\\n if (response.ok) {\\n toast.update(toastId, {\\n render: `Post created with ID: ${data.id} 🎉`,\\n type: \'success\',\\n isLoading: false,\\n autoClose: 3000,\\n ariaLabel: \'Post creation success\',\\n });\\n setTitle(\'\');\\n setPosts([data, ...posts]);\\n } else {\\n toast.update(toastId, {\\n render: `Error: ${response.status} - ${response.statusText}`,\\n type: \'error\',\\n isLoading: false,\\n autoClose: 3000,\\n });\\n }\\n } catch (error) {\\n toast.update(toastId, {\\n render: `Network error: ${error.message}`,\\n type: \'error\',\\n isLoading: false,\\n autoClose: 3000,\\n });\\n } finally {\\n setIsSubmitting(false);\\n }\\n };\\n useEffect(() => {\\n const fetchPosts = async () => {\\n try {\\n const response = await fetch(\'https://jsonplaceholder.typicode.com/posts?_limit=5\');\\n if (response.ok) {\\n const data = await response.json();\\n setPosts(data);\\n } else {\\n toast.error(\'Failed to load posts\', { ariaLabel: \'Fetch posts error\' });\\n }\\n } catch (error) {\\n toast.error(\'Failed to load existing posts\');\\n }\\n };\\n fetchPosts();\\n }, []);\\n return (\\n <div className=\\"min-h-screen bg-gray-100 py-12 px-4 sm:px-6 lg:px-8\\">\\n <div className=\\"max-w-md mx-auto bg-white rounded-xl shadow-md overflow-hidden md:max-w-2xl mb-8\\">\\n <div className=\\"p-8 w-full\\">\\n <div className=\\"uppercase tracking-wide text-sm text-indigo-500 font-semibold mb-1\\">Create New Post</div>\\n <h2 className=\\"block mt-1 text-lg leading-tight font-medium text-black mb-6\\">Share something with the community</h2>\\n <form onSubmit={handleSubmit} className=\\"space-y-6\\">\\n <div>\\n <label htmlFor=\\"title\\" className=\\"block text-sm font-medium text-gray-700\\">\\n Post Title\\n </label>\\n <input\\n type=\\"text\\"\\n id=\\"title\\"\\n value={title}\\n onChange={(e) => setTitle(e.target.value)}\\n placeholder=\\"Enter your post title\\"\\n className=\\"shadow-sm focus:ring-indigo-500 focus:border-indigo-500 block w-full sm:text-sm border-gray-300 rounded-md p-2 border\\"\\n required\\n />\\n </div>\\n <button\\n type=\\"submit\\"\\n disabled={isSubmitting}\\n className={`w-full flex justify-center py-2 px-4 border border-transparent rounded-md shadow-sm text-sm font-medium text-white bg-indigo-600 hover:bg-indigo-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-indigo-500 ${isSubmitting ? \'opacity-75 cursor-not-allowed\' : \'\'}`}\\n >\\n {isSubmitting ? 
\'Submitting...\' : \'Create Post\'}\\n </button>\\n </form>\\n </div>\\n </div>\\n <div className=\\"max-w-md mx-auto md:max-w-2xl\\">\\n <h3 className=\\"text-xl font-bold text-gray-800 mb-4\\">Recent Posts</h3>\\n <div className=\\"space-y-4\\">\\n {posts.length > 0 ? (\\n posts.map((post) => (\\n <div key={post.id} className=\\"bg-white rounded-xl shadow-md p-6 hover:shadow-lg transition-shadow duration-300\\">\\n <h4 className=\\"font-semibold text-lg text-gray-800\\">{post.title}</h4>\\n <p className=\\"mt-2 text-gray-600\\">{post.body}</p>\\n <div className=\\"mt-4 flex justify-between items-center\\">\\n <span className=\\"text-xs text-gray-500\\">Post ID: {post.id}</span>\\n <span className=\\"text-xs text-gray-500\\">User ID: {post.userId}</span>\\n </div>\\n </div>\\n ))\\n ) : (\\n <div className=\\"bg-white rounded-xl shadow-md p-6 text-center text-gray-500\\">\\n No posts available. Create your first post above!\\n </div>\\n )}\\n </div>\\n </div>\\n <ToastContainer\\n position=\\"top-right\\"\\n autoClose={3000}\\n newestOnTop\\n closeOnClick\\n rtl={false}\\n pauseOnFocusLoss\\n draggable\\n pauseOnHover\\n ariaLabel=\\"Post notifications\\"\\n />\\n </div>\\n );\\n}\\nexport default Posts;\\n\\n
When submitting a post, toast.loading
displays Creating your post… to indicate progress, and toast.update
switches to a success message (Post created with ID: X 🎉) or an error message (Error: …) based on the API response, using toastId
for precise updates.
If the title is empty, toast.warning
alerts Please enter a post title! to guide the user:
On component mount, failed post fetches trigger toast.error
with messages like Failed to load posts. The ToastContainer is configured with position top-right
, newestOnTop
, and draggable, leveraging v11’s accessibility with ariaLabel
and flexible options to ensure notifications are clear and interactive. This complements the form and post list UI.
We will use React-Toastify to display friendlier notifications like Welcome back! or You’re logged out, see ya!.

It’s a small touch that makes your app feel more personable, and React-Toastify makes it easy to pull off without the pile of custom CSS you would otherwise have to write.
\\nHere is how you will implement that using React-Toastify:
\\n\\"use client\\";\\nimport { toast, ToastContainer } from \'react-toastify\';\\nimport { useState } from \'react\';\\nfunction AuthComponent() {\\n const [isLoggedIn, setIsLoggedIn] = useState(false);\\n const handleLogin = async () => {\\n try {\\n await new Promise((resolve) => setTimeout(resolve, 1000)); // Simulate login\\n setIsLoggedIn(true);\\n toast.success(\'Welcome back, you’re in! 😎\', {\\n ariaLabel: \'Login success\',\\n });\\n } catch (error) {\\n toast.error(\'Login failed, check your credentials! 😕\', {\\n ariaLabel: \'Login error\',\\n });\\n }\\n };\\n const handleLogout = async () => {\\n try {\\n await new Promise((resolve) => setTimeout(resolve, 800));\\n setIsLoggedIn(false);\\n toast.info(\'Logged out—see you soon! 👋\', {\\n ariaLabel: \'Logout confirmation\',\\n onClose: (reason) => {\\n if (reason === true) {\\n console.log(\'User manually closed the logout toast\');\\n }\\n },\\n });\\n } catch (error) {\\n toast.error(\'Logout failed, try again!\', {\\n ariaLabel: \'Logout error\',\\n });\\n }\\n };\\n return (\\n <div className=\\"min-h-screen flex items-center justify-center bg-gradient-to-br from-blue-50 to-gray-100 p-4\\">\\n <div className=\\"bg-white rounded-2xl shadow-xl p-8 w-full max-w-md\\">\\n <h1 className=\\"text-3xl font-bold text-gray-800 mb-4 text-center\\">\\n {isLoggedIn ? \'Welcome!\' : \'Sign In\'}\\n </h1>\\n <p className=\\"text-gray-600 mb-6 text-center\\">\\n {isLoggedIn ? \'Ready to explore? Or take a break.\' : \'Log in to get started!\'}\\n </p>\\n <div className=\\"flex justify-center\\">\\n {isLoggedIn ? (\\n <button\\n onClick={handleLogout}\\n className=\\"flex items-center gap-2 bg-red-500 text-white py-3 px-6 rounded-lg font-medium text-lg hover:bg-red-600 transition-all hover:-translate-y-0.5 active:translate-y-0 focus:outline-none focus:ring-2 focus:ring-red-500 focus:ring-offset-2 disabled:opacity-50 disabled:cursor-not-allowed\\"\\n disabled={false}\\n >\\n <span>🚪</span> Log Out\\n </button>\\n ) : (\\n <button\\n onClick={handleLogin}\\n className=\\"flex items-center gap-2 bg-blue-600 text-white py-3 px-6 rounded-lg font-medium text-lg hover:bg-blue-700 transition-all hover:-translate-y-0.5 active:translate-y-0 focus:outline-none focus:ring-2 focus:ring-blue-600 focus:ring-offset-2 disabled:opacity-50 disabled:cursor-not-allowed\\"\\n disabled={false}\\n >\\n <span>🔑</span> Log In\\n </button>\\n )}\\n </div>\\n </div>\\n <ToastContainer\\n ariaLabel=\\"Auth notifications\\"\\n position=\\"top-right\\"\\n autoClose={3000}\\n newestOnTop\\n closeOnClick\\n pauseOnHover\\n className=\\"mt-4\\"\\n />\\n <style jsx global>{`\\n .Toastify__toast {\\n border-radius: 0.5rem;\\n font-size: 1rem;\\n font-weight: 400;\\n color: #ffffff;\\n padding: 0.75rem 1rem;\\n min-height: 48px;\\n display: flex;\\n align-items: center;\\n box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);\\n font-family: -apple-system, BlinkMacSystemFont, \'Segoe UI\', Roboto, sans-serif;\\n }\\n .Toastify__toast-body {\\n margin: 0;\\n padding: 0;\\n flex: 1;\\n line-height: 1.4;\\n }\\n .Toastify__toast--success {\\n background: #2f855a;\\n }\\n .Toastify__toast--error {\\n background: #c53030;\\n }\\n .Toastify__toast--info {\\n background: #2b6cb0;\\n }\\n .Toastify__toast-icon {\\n margin-right: 0.5rem;\\n display: flex;\\n align-items: center;\\n }\\n `}</style>\\n </div>\\n );\\n}\\nexport default AuthComponent;\\n\\n
In the code above, when a user clicks Log In, toast.success
shows Welcome back, you’re in! 😎 with a green background.
A logout triggers toast.info
with Logged out, see you soon! 👋 in blue.
Errors during either process trigger toast.error
with messages like Login failed, check your credentials! 😕:
The onClose
callback in the logout toast captures manual dismissals (reason === true)
, showcasing v11’s enhanced callback functionality. The ToastContainer uses ariaLabel
, position: \\"top-right\\"
, and pauseOnHover
, ensuring accessibility and smooth integration, with custom CSS styling the toasts to match the component’s aesthetic.
When running async operations like uploading a file or syncing data, you really do not want to leave your users wondering what’s happening. That’s a bad enough experience that I’d personally abandon the app.
\\nIn order to keep people like me, you’ll want to display a toast that says, Hang tight, we’re processing! or All done, you’re good to go! while the operation runs. Here is how we implement this using React-Toastify:
\\n\\"use client\\";\\nimport { toast, ToastContainer } from \'react-toastify\';\\nfunction ImageUploader() {\\n const handleUpload = async (file) => {\\n const uploadToast = toast.loading(\'Uploading your image...\', {\\n ariaLabel: \'Image upload in progress\',\\n });\\n try {\\n await new Promise((resolve) => setTimeout(resolve, 2000));\\n toast.update(uploadToast, {\\n render: \'Image uploaded successfully! 🖼️\',\\n type: \'success\',\\n isLoading: false,\\n autoClose: 3000,\\n ariaLabel: \'Image upload success\',\\n });\\n } catch (error) {\\n toast.update(uploadToast, {\\n render: \'Upload failed, try again!\',\\n type: \'error\',\\n isLoading: false,\\n autoClose: 3000,\\n });\\n }\\n };\\n return (\\n <div className=\\"min-h-screen flex items-center justify-center bg-gradient-to-br from-gray-100 to-gray-200 p-4 font-sans\\">\\n <div className=\\"bg-white rounded-xl shadow-lg p-8 w-full max-w-md text-center\\">\\n <h2 className=\\"text-2xl font-semibold text-gray-900 mb-2 leading-tight break-words\\">Upload Your Image</h2>\\n <p className=\\"text-base text-gray-600 mb-6 leading-relaxed opacity-90\\">Choose a file to share with the world!</p>\\n <label\\n htmlFor=\\"file-upload\\"\\n className=\\"inline-flex items-center gap-2 bg-indigo-600 text-white py-3 px-6 rounded-lg cursor-pointer text-base font-medium hover:bg-indigo-500 transition-all hover:-translate-y-0.5 active:translate-y-0\\"\\n >\\n <span className=\\"text-lg\\">📁</span>\\n Select Image\\n </label>\\n <input\\n id=\\"file-upload\\"\\n type=\\"file\\"\\n className=\\"hidden\\"\\n onChange={(e) => handleUpload(e.target.files[0])}\\n accept=\\"image/*\\"\\n />\\n </div>\\n <ToastContainer\\n ariaLabel=\\"Upload notifications\\"\\n position=\\"top-right\\"\\n autoClose={3000}\\n newestOnTop\\n closeOnClick\\n pauseOnHover\\n className=\\"toast-container\\"\\n />\\n <style jsx global>{`\\n .Toastify__toast {\\n border-radius: 0.5rem;\\n font-size: 1rem;\\n font-weight: 400;\\n color: #fff;\\n padding: 0.75rem 1rem;\\n min-height: 48px;\\n display: flex;\\n align-items: center;\\n box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);\\n }\\n .Toastify__toast-body {\\n margin: 0;\\n padding: 0;\\n flex: 1;\\n line-height: 1.4;\\n }\\n .Toastify__toast--success {\\n background: #2f855a;\\n }\\n .Toastify__toast--error {\\n background: #c53030;\\n }\\n .Toastify__toast--loading {\\n background: #4a5568;\\n }\\n .Toastify__toast-icon {\\n margin-right: 0.5rem;\\n }\\n `}</style>\\n</div>\\n );\\n}\\nexport default ImageUploader;\\n\\n
In the ImageUploader
component above, the toast.loading
method displays an Uploading your image… notification when a file is selected.
Upon completion, toast.update
dynamically updates the toast to either a success message (Image uploaded successfully! 🖼️) with a green background or an error message (Upload failed, try again!) with a red background, using type: success
or type: error
:
The ariaLabel prop enhances accessibility, and the ToastContainer is configured with options like position="top-right", autoClose={3000}, and pauseOnHover to ensure the toasts are user-friendly.
This section answers common questions developers ask about React-Toastify v11.
\\nTo install React-Toastify, run:
\\nnpm i react-toastify\\n\\n
Then, import ToastContainer
and toast
from \'react-toastify\'
, as seen in the example below:
import { ToastContainer, toast } from \'react-toastify\';\\n\\nfunction App() {\\n return (\\n <div>\\n <button onClick={() => toast(\\"Hello!\\")}>Notify</button>\\n <ToastContainer />\\n </div>\\n );\\n}\\n\\n
The example above sets up a simple toast notification system.
\\nChange the position of toasts by setting the position
prop on <ToastContainer />
to values like \\"top-right\\"
, \\"bottom-left\\"
, or \\"top-center\\"
.
For individual toasts, pass position
to the toast()
function, e.g., toast(\\"Hi!\\", { position: \\"bottom-right\\" })
.
For animations, import transitions (Bounce
, Slide
, Zoom
, Flip
) from \'react-toastify\'
and set the transition
prop on <ToastContainer />
or per toast, as seen in the code example below:
import { ToastContainer, toast, Slide, Zoom } from 'react-toastify';

function App() {
  return (
    <div>
      <button onClick={() => toast("Hi!", { position: "bottom-left", transition: Slide })}>
        Notify
      </button>
      <ToastContainer position="top-center" transition={Zoom} />
    </div>
  );
}
This places toasts at the top-center with a Zoom animation by default, or bottom-left with a slide for the specific toast.
\\nThe top best use cases for React-Toastify are form submissions, user actions, and any form of alerts.
To update React-Toastify to the latest version (e.g., v11.0.5), run the following in your project’s terminal:

npm install react-toastify@latest
Make sure you check your current version in package.json
under \\"dependencies\\"
. After updating, verify compatibility, as v11 requires React 18+.
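You can also confirm the currently installed version from your terminal:

npm list react-toastify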
Test your app to ensure toasts render correctly. v11 simplified the DOM and removed some props like enableMultiContainer
. If issues arise, consult the official changelog for breaking changes.
I’d advise you to always back up your project before updating to avoid disruptions.
\\nIn this tutorial, we learned how to style toast messages using React-Toastify. We also explored how to style custom toast notifications to suit our preferences, and how to use the useNotificationCenter
Hook to create a cool notification center where we can display all our toast notifications. We also looked at real-world examples and saw the new improvements that come with version 11.
React-Toastify is a useful React toast library because it is highly customizable and provides many toast variants. Other tools are available if you need even more functionality, such as implementing animated toasts in React.
This article illustrates building an ETL pipeline entirely in TypeScript to extract weather data from the OpenWeatherMap API and COVID-19 statistics from a GitHub CSV, transform them into a unified structure, and load the results into a PostgreSQL database using Prisma. The process uses TypeScript’s static typing and async/await syntax for clearer API interactions and error handling, and it automates the workflow using node-cron.
\\nETL, or Extract, Transform, Load, is a data processing pattern where information is collected from external sources, transformed into a consistent structure, and stored in a database for further use or analysis. An ETL pipeline automates this flow, making sure data is ingested, transformed, and persisted in a repeatable way.
\\nTypeScript’s design enforces type safety from the start of this process, significantly reducing runtime errors that may occur when data from external sources doesn’t meet expectations.
\\nProjects that combine multiple APIs and file formats benefit from compile‑time checks, ensuring that every piece of data adheres to the defined structure. This approach minimizes debugging surprises that often plague dynamically typed Python applications and results in a maintainable codebase where refactoring is safe and predictable.
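As a tiny illustration (the interface and field names here are hypothetical), the compiler rejects misspelled or missing fields at build time rather than letting them surface at runtime:

interface WeatherReading {
  temp: number;
  humidity: number;
}

// Writing `humidty: 60` here would fail to compile with
// "Object literal may only specify known properties"
const reading: WeatherReading = {
  temp: 21.5,
  humidity: 60,
};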
\\nThis section guides you through creating a new Node.js project and setting up TypeScript for building your ETL pipeline.
\\nThe project structure is as follows:
\\nproject/\\n├── prisma/\\n│ └── schema.prisma\\n├── src/\\n│ ├── extract.ts\\n│ ├── transform.ts\\n│ ├── load.ts\\n│ └── schedule.ts\\n├── package.json\\n└── tsconfig.json\\n\\n
Begin by creating a new Node.js project and configuring TypeScript. Create a tsconfig.json file in the project’s root with the following configuration:
{\\n \\"compilerOptions\\": {\\n \\"target\\": \\"ES2019\\",\\n \\"module\\": \\"commonjs\\",\\n \\"strict\\": true,\\n \\"esModuleInterop\\": true,\\n \\"outDir\\": \\"./dist\\"\\n },\\n \\"include\\": [\\"src/**/*\\"]\\n}\\n\\n
With this configuration, initialize your project and install dependencies. Use npm
to install libraries for HTTP requests, scheduling, CSV parsing, and database interaction:
npm init -y\\nnpm install axios node-cron papaparse @prisma/client\\nnpm install --save-dev typescript ts-node @types/node @types/papaparse\\nnpx prisma init\\n\\n
The extraction phase involves calling the OpenWeatherMap API and downloading COVID-19 data hosted on GitHub. In a file named src/extract.ts
, the code below fetches weather data for a specific city and retrieves CSV content for COVID-19 data:
import axios from \'axios\';\\nimport Papa from \'papaparse\';\\nexport interface WeatherResponse {\\n main: { temp: number; humidity: number };\\n weather: { description: string }[];\\n}\\nexport async function fetchWeatherData(city: string, apiKey: string): Promise<WeatherResponse> {\\n const url = `https://api.openweathermap.org/data/2.5/weather?q=${city}&appid=${apiKey}&units=metric`;\\n const response = await axios.get<WeatherResponse>(url);\\n return response.data;\\n}\\nexport async function fetchCovidData(): Promise<any[]> {\\n const csvUrl = \'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/11-13-2022.csv\';\\n const response = await axios.get(csvUrl, { responseType: \'text\' });\\n const parsed = Papa.parse<{ Country_Region: string; Last_Update: string }>(response.data, {\\n header: true,\\n skipEmptyLines: true, // Ignores blank lines\\n dynamicTyping: true, // Converts numbers correctly\\n });\\n if (parsed.errors.length) {\\n console.warn(\'CSV parsing errors:\', parsed.errors); // Logs errors instead of throwing immediately\\n }\\n // Filter out completely invalid rows (rows missing essential fields)\\n return parsed.data.filter(row => row.Country_Region && row.Last_Update) as any[];\\n}\\n\\n
This code fetches weather data from the OpenWeatherMap API and COVID‑19 CSV data from GitHub using Axios, then parses and filters the CSV data with papaparse for structured, type-safe results. It defines a WeatherResponse interface to ensure the weather data adheres to expected types while gracefully handling CSV parsing errors and filtering out incomplete rows.
\\nTypeScript’s async/await model allows developers to write asynchronous API calls in a linear, easy-to-read style, reducing callback nesting and simplifying error handling. In contrast, Python’s requests
model is synchronous by default, requiring additional frameworks or workarounds for async behavior, which can complicate code and reduce clarity.
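To make that concrete, here is a small hypothetical helper (not one of the pipeline files) showing how try/catch keeps the error path linear:

import axios from 'axios';

// Fetch JSON with linear async/await error handling
async function fetchJson<T>(url: string): Promise<T | null> {
  try {
    const response = await axios.get<T>(url);
    return response.data;
  } catch (error) {
    console.error(`Request to ${url} failed:`, error);
    return null;
  }
}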
Transformation enforces a consistent schema before loading data into the database. In src/transform.ts
, data from the weather API and COVID-19 CSV are normalized.
The following example aggregates COVID-19 data for the United Kingdom and merges it with weather information:
\\nexport interface TransformedData {\\n city: string;\\n temperature: number;\\n humidity: number;\\n weatherDescription: string;\\n confirmedCases: number;\\n deaths: number;\\n recovered: number;\\n activeCases: number;\\n }\\n\\n export function transformDataMultiple(\\n weather: { main: { temp: number; humidity: number }; weather: { description: string }[]; name?: string },\\n covidData: any[]\\n ): TransformedData[] {\\n const ukData = covidData.filter((row) => row[\'Country_Region\'] === \'United Kingdom\');\\n if (!ukData.length) {\\n throw new Error(\'No COVID-19 data found for United Kingdom.\');\\n }\\n\\n return ukData.map((row) => ({\\n city: weather.name || \'London\',\\n temperature: weather.main.temp,\\n humidity: weather.main.humidity,\\n weatherDescription: weather.weather.length > 0 ? weather.weather[0].description : \'Unknown\',\\n confirmedCases: parseInt(row[\'Confirmed\'], 10) || 0,\\n deaths: parseInt(row[\'Deaths\'], 10) || 0,\\n recovered: parseInt(row[\'Recovered\'], 10) || 0,\\n activeCases: parseInt(row[\'Active\'], 10) || 0\\n }));\\n }\\n\\n
Using TypeScript interfaces and runtime validations ensures that if the API data changes unexpectedly, the error is caught early during transformation rather than after data insertion into the database.
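For example, a small runtime type guard (hypothetical, and stricter than the filter used in extract.ts) could reject malformed rows before they ever reach the transformation step:

// Hypothetical guard mirroring the CSV fields used above
interface RawCovidRow {
  Country_Region: string;
  Confirmed: number | string;
}

function isRawCovidRow(row: unknown): row is RawCovidRow {
  const r = row as Record<string, unknown>;
  return typeof r?.Country_Region === 'string' && r?.Confirmed != null;
}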
\\nThe final stage uses Prisma to perform type-safe database operations. Define your schema in prisma/schema.prisma
:
datasource db {\\n provider = \\"postgresql\\"\\n url = env(\\"DATABASE_URL\\")\\n}\\n\\ngenerator client {\\n provider = \\"prisma-client-js\\"\\n}\\n\\nmodel Record {\\n id Int @id @default(autoincrement())\\n city String\\n temperature Float\\n humidity Float\\n weatherDescription String\\n confirmedCases Int\\n deaths Int\\n recovered Int\\n activeCases Int\\n}\\n\\n
This Prisma schema defines a PostgreSQL data source using an environment variable for the connection URL, configures a JavaScript client generator, and declares a Record
model with fields for location, weather data (temperature, humidity, description), and COVID-19 case statistics (confirmed, deaths, recovered, active), using an auto-incrementing integer as the primary key.
Now, generate the Prisma client with npx prisma generate
, then create a src/load.ts
file with the following code:
import { PrismaClient } from \'@prisma/client\';\\nimport { TransformedData } from \'./transform\';\\nconst prisma = new PrismaClient();\\nexport async function loadData(data: TransformedData[]): Promise<void> {\\n try {\\n for (const record of data) {\\n await prisma.record.create({\\n data: {\\n city: record.city,\\n temperature: record.temperature,\\n humidity: record.humidity,\\n weatherDescription: record.weatherDescription,\\n confirmedCases: record.confirmedCases,\\n deaths: record.deaths,\\n recovered: record.recovered,\\n activeCases: record.activeCases\\n }\\n });\\n }\\n } catch (error) {\\n console.error(\'Error loading data:\', error);\\n throw error;\\n } finally {\\n await prisma.$disconnect();\\n }\\n}\\n\\n
This approach enforces the database schema at both design time and runtime. Prisma’s type-safe queries ensure that only valid data reaches the PostgreSQL database, preventing corruption due to schema mismatches.
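Since loadData inserts one row at a time, you could likely batch the writes with Prisma’s createMany on PostgreSQL instead; here is a sketch against the same Record model:

import { PrismaClient } from '@prisma/client';
import { TransformedData } from './transform';

const prisma = new PrismaClient();

// Batch variant of loadData: a single INSERT for all rows
export async function loadDataBatch(data: TransformedData[]): Promise<void> {
  try {
    await prisma.record.createMany({ data });
  } finally {
    await prisma.$disconnect();
  }
}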
\\nOnce the ETL pipeline is set up, you need to test it to ensure it correctly extracts, transforms, and loads the data into your PostgreSQL database. Follow these steps:
\\nEnsure that your PostgreSQL instance is running and accessible. If using Docker, you can start a PostgreSQL container with:
\\ndocker run --name postgres -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=admin -e POSTGRES_DB=etl -p 5432:5432 -d postgres\\n\\n
Set the DATABASE_URL
environment variable in a .env
file. While you are in there, you can also paste in your OpenWeather API key:
DATABASE_URL=\\"postgresql://admin:admin@localhost:5432/etl\\"\\nOPENWEATHER_API_KEY=\\"xxxxxxxxxxxxxxxxxxxxx\\" \\n\\n
Now, run the following command to ensure that the database schema is properly applied:
\\nnpx prisma migrate dev --name init\\n\\n
This will create the necessary tables in your database.
\\n\\nNext, run the extraction, transformation, and loading steps sequentially by creating a script src/index.ts
:
import { fetchWeatherData, fetchCovidData } from \'./extract\';\\nimport { transformData } from \'./transform\';\\nimport { loadData } from \'./load\';\\n\\nconst CITY = \'London\';\\nconst API_KEY = process.env.OPENWEATHER_API_KEY;\\n\\n(async () => {\\n try {\\n if (!API_KEY) throw new Error(\\"Missing OpenWeather API Key\\");\\n\\n console.log(\'Fetching data...\');\\n const weatherData = await fetchWeatherData(CITY, API_KEY);\\n const covidData = await fetchCovidData();\\n\\n console.log(\'Transforming data...\');\\n const transformedData = transformData(weatherData, covidData);\\n\\n console.log(\'Loading data into the database...\');\\n await loadData(transformedData);\\n\\n console.log(\'ETL process completed successfully.\');\\n } catch (error) {\\n console.error(\'ETL process failed:\', error);\\n }\\n})();\\n\\n
This script orchestrates the ETL (Extract, Transform, Load) process by sequentially fetching weather data for London from OpenWeather and COVID-19 statistics from GitHub, transforming them into a unified structure, and loading the processed data into a PostgreSQL database. It ensures the OpenWeather API key is set and logs each stage for visibility.
Compile and run the ETL process with the following commands. Note that plain Node.js does not load .env files automatically; on Node 20.6+ you can pass --env-file=.env to the node command, or load the variables with the dotenv package:
\\nnpx tsc\\nnode dist/index.js\\n\\n
After running the script, check your database to verify the data was loaded correctly:
\\ndocker exec -it postgres psql -U admin -d etl -c \\"SELECT * FROM \\\\\\"Record\\\\\\";\\"\\n\\n
This should return the weather and COVID-19 data stored in your PostgreSQL database:
\\nThe final stage involves automating the ETL process. With Python, you would need schedulers like Airflow or Celery. These third-party tools require a separate message broker and introduce additional layers of complexity for distributed task management.
\\nMeanwhile, TypeScript’s scheduling libraries, like node‑cron, integrate directly into the application without extra overhead.
\\nAutomation is handled by node‑cron, which schedules the complete ETL process periodically. In src/schedule.ts
, integrate the extraction, transformation, and loading steps into a scheduled task:
import cron from \'node-cron\';\\nimport { fetchWeatherData, fetchCovidData } from \'./extract\';\\nimport { transformData } from \'./transform\';\\nimport { loadData } from \'./load\';\\n\\nconst OPENWEATHER_API_KEY = process.env.OPENWEATHER_API_KEY || \'\';\\nconst CITY = \'London\';\\n\\ncron.schedule(\'0 * * * *\', async () => {\\n console.log(\'Starting ETL job...\');\\n try {\\n const weather = await fetchWeatherData(CITY, OPENWEATHER_API_KEY);\\n const covidData = await fetchCovidData();\\n const transformed = transformData(weather, covidData);\\n await loadData(transformed);\\n console.log(\'ETL job completed successfully.\');\\n } catch (error) {\\n console.error(\'ETL job failed:\', error);\\n }\\n});\\n\\n
This scheduled task triggers at the start of every hour. The integration of async/await ensures that each step is executed sequentially and that errors are caught and logged in a centralized manner.
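If hourly is not the cadence you want, node-cron accepts standard cron expressions, so adjusting the schedule is a one-line change (runEtlJob here is a hypothetical wrapper around the job body above):

// Every 15 minutes instead of at the top of each hour
cron.schedule('*/15 * * * *', runEtlJob);

// Every day at 02:30
cron.schedule('30 2 * * *', runEtlJob);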
\\nAfter setting the DATABASE_URL
and OPENWEATHER_API_KEY
environment variables, compile and run the scheduled ETL process using:
npx ts-node src/schedule.ts\\n\\n
This command starts the ETL process, executing extraction, transformation, and loading on the defined schedule. The console output will indicate the progress and success or failure of each task.
\\nThe combination of static type checking, consistent data transformation, and a unified development environment gives TypeScript a clear advantage over Python for building ETL pipelines. With TypeScript, the entire stack, from API extraction to data loading, remains type-safe, reducing the debugging overhead that Python developers often face with runtime type errors.
\\nModern libraries like Prisma and node‑cron streamline development and deployment, ensuring that the entire pipeline can be built and maintained with fewer surprises. TypeScript’s async/await model also enables clean, linear code that’s easier to maintain compared to Python’s often fragmented approach to asynchronous behavior.
\\nOverall, TypeScript offers a more maintainable and reliable ETL pipeline compared to Python’s approach. Developers gain a unified experience from API integration to database loading while minimizing unexpected runtime issues.
\\nEven though some of the data we used doesn’t change very often and may not be up to date, this is a proof of concept, so feel free to apply it in your own projects.
TypeScript is being ported to Go. This is known as “TypeScript 7” (it is currently on 5.8). It’s quite likely that you know this by now, as there have been excellent communications from the TypeScript team. In fact, hats off to the team; it’s been an object lesson in how to communicate well: straightforward, clear, and open.
\\nThere’s no shortage of content detailing what is known about the port. This piece is not that. Rather, it’s the reflections of two people in the TypeScript community:
John Reilly, the maintainer of ts-loader, the webpack loader for TypeScript. In his day job, he works at Investec, a South African bank, and is based in London. The greatest city on earth (in his opinion)
Ashley Claymore, the author of ts-blank-space, who works at Bloomberg and is one of its TC39 delegates

It’s going to be a somewhat unstructured wander through our reactions and hopes. Buckle up for opinions, thoughts, and feelings.
\\nI mean, weren’t we happy with each other anyway? Just as we were? Yes, but also no.
\\nIf you’ve been in the JavaScript/TypeScript ecosystem recently, you’ve probably noticed a growing number of tools that support JavaScript development — but are written in other languages. We’ve had esbuild, written in Go. SWC, written in Rust. Bun, written in Zig. Deno, written in Rust.
\\nThe list kept growing. All of these tools brought performance gains, which was — and still is — a wonderful thing. We’ll talk more about performance later. One hold-out was TypeScript. It continued to be written in TypeScript. While performance improvements did happen — and were an area of focus for the team — they were incremental, not transformative.
\\nYou could sense the impatience in the community, as people started trying to speed up TypeScript by building their own implementations. Most notably, DongYoon Kang, the creator of SWC, implemented the transpilation aspect of TypeScript. He then attempted to build a type checker as well — first in Rust, then in Go, then back to Rust.
\\nThat project didn’t ultimately succeed, but the fact that people were willing to try showed how strong the demand for better performance had become. A successful port seemed inevitable — and if it didn’t come from the TypeScript team, it could have put the ecosystem in a tricky spot. A port to a language other than TypeScript was going to happen eventually. And here we are.
\\nWhat does the Go port meaningfully change about TypeScript? According to Josh Goldberg‘s useful framing, TypeScript is four things:
The language
The type checker
The compiler
The language services
s and interface
s as you were before. There is no difference.
The same applies to the checks that the type checker is performing. The code that was detected as an error before will still fail to type check with TypeScript 7:
\\nconst i: number = \\"not actually a number\\";\\n// ts: Type \'string\' is not assignable to type \'number\'\\n
This is where the differences begin. The type checker, compiler, and language services do change. They become an order of magnitude faster.
\\nPut your hand up if you don’t care about performance. That’s right — no hands went up. We all care about performance. Whenever you have to work with technology that lags and breaks you out of your flow, you notice it. It’s basically all you notice.
\\nThe TypeScript team has always cared about performance, particularly in the area of development tooling. TypeScript co-creator Anders Hejlsberg has mentioned in interviews the need for language servers to provide fast feedback as people work — something measured in milliseconds, not seconds.
\\nWhat are the implications of these changes to the TypeScript ecosystem? Put simply, a faster VS Code and faster builds.
\\nAt Investec, where John works, there are many engineers who use VS Code and spend part of their engineering life writing TypeScript and JavaScript. All those engineers will benefit from a snappier development experience. When they open up a project in VS Code, the time it takes for the language service to wake up will drop dramatically. As they refactor their code, the experience will be faster. The “time to red squiggly line” metric will decrease. That’s a good thing.
\\nAs a consequence, engineers should be incrementally more effective, given that there are fewer pauses in their workflow.
\\n\\nThe same incremental gain applies to builds. As our engineers build applications, they run TypeScript builds on their machines and in a continuous integration context. These will all be faster than they were before. We’ll continually experience a performance improvement, which is a benefit.
\\nThis, of course, is not Investec-specific but a general improvement that everyone will benefit from. Across the world, wherever anyone writes and builds TypeScript, they will do so faster.
\\nMany languages have bootstrapping compilers, meaning the compiler is written in the same language it compiles. TypeScript has followed this model since it was first open sourced. That’s about to change: the compiler will no longer be written in TypeScript, but in Go. This may be the first example of a language intentionally moving away from a bootstrapping compiler — and it’s all in the name of performance.
\\nOf all the aspects of the Go port, this, according to John, is the most anxiety-inducing. The TypeScript team will be writing less TypeScript in their day-to-day work. They won’t stop using it, of course, but they’ll certainly be writing more Go and less TypeScript. One implication of this is reduced dogfooding, meaning less direct feedback from the people building the language about what it’s like to use it.
\\nThat said, given how broad and active the TypeScript community is, this may not be as concerning as it first seems. The team is highly connected to the community and, even if they’re writing less TypeScript themselves, there are plenty of others who will continue to provide feedback. It’s also worth remembering that the TypeScript team has often written the language in ways that don’t necessarily reflect how most developers use it. For example, they’ve historically relied heavily on classes (which we’ll talk about more below) and, until recently, modules. Before Jake Bailey’s monumental effort to migrate the TypeScript codebase to use modules, it was still using namespaces. That didn’t prevent the team from continuing to improve support for modern JavaScript features.
\\nAnother potential concern is whether the TypeScript team might become less involved in TC39, the committee responsible for evolving the JavaScript language. The TypeScript team has been instrumental in shaping JavaScript over the years, contributing to features like optional chaining, decorators, and more. As they shift to writing more Go, some have wondered whether their influence on JavaScript might diminish.
\\nAshley, who is one of Bloomberg’s TC39 delegates, isn’t worried. Daniel Rosenwasser, the Principal Product Manager for TypeScript, recently became one of TC39’s two incoming facilitators. Ron Buckton, another delegate from the TypeScript team, continues to champion several exciting proposals, such as Explicit Resource Management. The TypeScript team’s input remains just as important, regardless of what language the compiler is written in.
\\nThere are three primary ways to interact with the TypeScript package:
\\ntsc
import ts from \\"typescript\\"
tsserver
Let’s contemplate how these might change.
\\ntsc
There will still be a CLI and it sounds like the goal will be very close compatibility. The implementation may change to be Go, but you would still be able to interact with the CLI in the same way.
\\nimport ts from \\"typescript\\"
The TypeScript team are still working on this part. There will still be a JavaScript API, though it’s almost certain that there will be changes here. Exactly how different they are is not yet known. One core question is whether the currently synchronous API will need to become asynchronous due to calling Go, as this can be a difficult change to migrate to. The good news is that it looks like it will be able to retain a synchronous API.
\\ntsserver
Editors such as VSCode, and even linters, can interact with TypeScript via its language server. Interestingly, even though TypeScript helped inspire the LSP specification, currently, it doesn’t actually implement it. The TypeScript team is using the port as an opportunity to align with the LSP specification, which is a positive change.
\\nTools use one or a combination of the above to drive TypeScript on their users' behalf. There will be migration work for those tools, but much of it may happen transparently to the end developer.
\\nLet’s drill further into tools that use TypeScript internally. There will be an impact.
\\nJohn is the maintainer of ts-loader
, a widely used webpack loader for TypeScript. This loader depends upon TypeScript APIs that have been unchanged for years.
In fact, John went so far as to comment on Bluesky in early March:
\\n— only to have the TypeScript team effectively come out and say “hold my beer”.
\\nIt’s very early days, but we know for sure that the internal APIs of TypeScript (that ts-loader
depends upon) will change. To quote Daniel Rosenwasser of the TypeScript team:
\\nWhile we are porting most of the existing TypeScript compiler and language service, that does not necessarily mean that all APIs will be ported over.
ts-loader
has two modes of operation: a type checking mode (the default) and a transpilation-only mode.
It’s very unlikely that TypeScript 7 will work with ts-loader
’s type checking mode, without significant refactoring. However, it’s quite likely that ts-loader
might be able to support transpilation-only mode with minimal changes. This mode only really depends on the transpileModule
API of TypeScript. If the transpileModule
API lands, then the transpilation-only mode of ts-loader
should just work. On the other hand, this might be the natural end of the road for the type checking mode of ts-loader
.
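\\nFor reference, the transpileModule API that transpilation-only mode leans on looks roughly like this today (a sketch; the source string and options are illustrative):

import ts from "typescript";

const source = "const greet = (name: string): string => `hi ${name}`;";

// Strip types and rewrite module syntax without running the type checker
const result = ts.transpileModule(source, {
  compilerOptions: {
    module: ts.ModuleKind.CommonJS,
    target: ts.ScriptTarget.ES2020,
  },
});

console.log(result.outputText);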
Ashley is the author of ts-blank-space
, an open-source TypeScript-to-JavaScript transform published by Bloomberg that avoids the need for source maps. It also depends on TypeScript’s API, so it may be affected by the port. It’s too early to say, but the change here may turn into an opportunity.
A not-uncommon request of ts-blank-space
is to investigate using a different parser. This is because, while ts-blank-space itself is very small and only uses TypeScript's parsing API, the parser is not an isolated part of TypeScript, so importing it still pulls in the whole type checker. For projects that already depend on TypeScript, there is no added cost, but it makes ts-blank-space
less appealing for use cases that are not already importing TypeScript as a library.
Some tooling will have a natural path forward. For instance, typescript-eslint
will continue onwards with TypeScript 7. The TypeScript team are planning to support typed linting via the new, faster APIs, which means ESLint setups that rely on type information will get faster as TypeScript does.
However, tooling that depends upon internal TypeScript APIs, which are going to change radically, may well cease to exist in its current form. This will vary project by project, but expect change. And this is fine; change is a constant.
\\nOnce it became clear that TypeScript would no longer be written in TypeScript, people naturally had strong opinions about what language should take its place. Fans of C# wished the team had picked C#, especially given Anders’ involvement with the language. Rust enthusiasts hoped for Rust. The good news for those folks is there’s still a chance Rust will play a role: the Node.js bindings for TypeScript 7 may use a Rust-based package.
\\nIf John had been asked to guess the replacement language ahead of time, he would’ve said Rust or maybe Zig (which Bun is built with). Go felt like a bit of a left-field pick, but in hindsight, it makes total sense. esbuild is written in Go, so there’s a successful precedent. Go has a garbage collector (unlike Rust), which makes porting the codebase significantly easier. Meanwhile, C# leans heavily on classes, whereas the TypeScript compiler makes only light use of them, so porting to C# would have been an uphill climb.
\\nThe choice of Go reflects pragmatism, which has always been at the heart of TypeScript’s ethos. In fact, if you look at TypeScript’s official design goals, you’ll see it again and again. Perhaps most famously, soundness is listed as a “non-goal.” Instead, TypeScript aims to strike a balance between correctness and productivity.
\\nBottom line: pragmatism is the TypeScript way, and Go is a pragmatic choice.
\\nThe port is also evidence that JavaScript can be a slow language in which to implement a type checker. To borrow a line from Anders’ “Why Go?” post:
\\n\\nNo single language is perfect for every task.
Type checking is an intensive task. One way to think about it is that the type checker emulates the execution of a program, line by line, detecting when that emulation breaks a rule. The larger the program, the more work there is to do.
\\nWhen the type checker is written in a dynamic language, it requires another program to run it. In TypeScript’s case, we essentially have a JavaScript engine running the TypeScript checker, which is running an emulation of another program. It’s no surprise that running the checker natively would be noticeably faster.
\\nSwitching to Go brings a reported 10× speedup. That gain is roughly split: around 3.5× from running natively, and the rest from better parallelization.
\\nGiven how much more work it takes to execute code in a dynamic language versus a precompiled native binary, if anything, it’s actually impressive that the difference isn’t even greater. That speaks to the extraordinary performance work that’s gone into V8, the JavaScript engine used by Node.js.
\\nIt is possible to write JavaScript programs that work in parallel today, but doing so efficiently often requires low-level APIs like SharedArrayBuffer
, which force you to work with raw bytes. There’s a Stage 2 proposal to introduce Shared Structs to JavaScript. If that moves forward, it could make it easier to take advantage of multiple cores in JavaScript applications.
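\\nTo make the “raw bytes” point concrete, here’s a minimal sketch of what shared-memory parallelism looks like in JavaScript today (the worker file name is hypothetical, and running this requires a cross-origin-isolated context):

// main thread: allocate shared memory and hand it to a worker
const shared = new SharedArrayBuffer(4);
const counter = new Int32Array(shared);

const worker = new Worker("worker.js"); // hypothetical worker script
worker.postMessage(shared);

// Both threads must coordinate over the raw bytes using Atomics
Atomics.add(counter, 0, 1);
console.log(Atomics.load(counter, 0));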
Of course, JavaScript still has many strengths.
\\nThe ecosystem demanded a faster TypeScript. Performance cannot be ignored these days. As a consequence, some kind of port of TypeScript was bound to happen. If we accept that view, then what next? Well, the way that the TypeScript team has started executing on the migration fills us with confidence. The TypeScript team are talented, they are pragmatists and their choices are wise.
\\nThis is going to Go well.
\\nThanks to Jake Bailey, of the TypeScript team, for reviewing this piece – greatly appreciated! Also to Josh Goldberg for writing up his classification of what makes up TypeScript; many thanks!
Here is a quick summary of the adoption and popularity of the React chart libraries we’ll discuss below:
\\nLibrary | \\nStars | \\nDownloads | \\nRendering | \\nBacked by | \\n
---|---|---|---|---|
Recharts | \\n24.8k+ | \\n3.6m+ | \\nSVG | \\nOpen source community | \\n
react-chartjs-2 | \\n6.8k+ | \\n1.6m+ | \\nCanvas | \\nOpen source community | \\n
Victory | \\n11.1k | \\n272k+ | \\nSVG | \\nNearform (formerly Formidable Labs) | \\n
Nivo | \\n13.5k+ | \\n665k+ | \\nSVG, Canvas, HTML | \\nOpen source community | \\n
React ApexCharts | \\n1.3k+ | \\n550k+ | \\nSVG | \\nOpen source community | \\n
Ant Design Charts | \\n2k+ | \\n61k+ | \\nCanvas | \\nAnt Design Team | \\n
Apache ECharts | \\n62.2k+ | \\n1.1m+ | \\nSVG, Canvas | \\nApache | \\n
visx | \\n19.9k+ | \\n2.2m+ | \\nSVG | \\nAirbnb | \\n
MUI X Charts | \\n4.7k | \\n349k+ | \\nSVG | \\nMUI team | \\n
Editor’s note: This article was last updated by Carlos Mucuho in April 2025 to include emerging libraries such as React ApexCharts, Ant Design Charts, and MUI X Charts, share developer community insights on trending libraries, and address some common React chart library FAQs.
\\nChart libraries are designed to ease the process of building charts and other data visualizations. When working on small projects, it’s often simple enough to create charts from scratch. However, if you’re working on a project that requires you to display data of different types, it can make more sense to use a chart library.
\\nIn today’s React ecosystem, there are many libraries designed to help you create interactive, responsive, and even animated charts. In the next sections, we’ll compare the top React chart libraries in 2025, evaluating them for criteria including features, documentation, community adoption, and customizability.
\\nDevelopers on Reddit and X express diverse opinions on React chart libraries, reflecting varying project needs and preferences.
\\nWith over 24.8K stars on GitHub, Recharts is a redefined chart library built with D3 and React. One of the most popular charting libraries for React, Recharts has excellent documentation that is easy to understand, as well as great project maintainers.
\\nRecharts follows React’s component principle by enabling users to build charts with its reusable React components. It provides beautiful charts out of the box that can be customized by tweaking the existing component’s props or adding custom ones.
\\nRecharts has drawing support for SVGs only and does not provide support for mobile. The charts are not responsive by default, but can be made responsive by using the ResponsiveContainer
wrapper component.
Recharts has been around for a while, so it has a large developer community. You can easily get started with this chart library by using its CDN or installing it with either npm or Yarn:
\\nnpm install recharts\\nOR\\nyarn add recharts\\n\\n
CDN:
\\n<script src=\\"https://unpkg.com/react/umd/react.production.min.js\\"></script>\\n<script src=\\"https://unpkg.com/react-dom/umd/react-dom.production.min.js\\"></script>\\n<script src=\\"https://unpkg.com/recharts/umd/Recharts.min.js\\"></script>\\n\\n
If you’ve used Chart.js in React, you should experience no learning curve when using react-chartjs-2. react-chartjs-2 is a React wrapper for the popular JavaScript Chart.js library. Many features of Chart.js can be used in react-chartjs-2.
\\nreact-chartjs-2 has drawing support for Canvas only and renders on the client-side. At the time of writing, it has more than 6.8K stars on GitHub.
\\nreact-chartjs-2 supports animation, and most of the charts it offers are responsive by default. The library provides some components for various types of chart styles out of the box and also allows for customization.
\\nAlthough react-chartjs-2 does not have detailed documentation of its own, its website shows the different chart types and how to get started with them. Additionally, Chart.js has detailed, easy-to-understand documentation.
\\nThis library performs well across all modern browsers and also has a large community of users and great maintainers. It can be installed using npm or Yarn:
\\nnpm i react-chartjs-2 chart.js\\nOR\\nyarn add react-chartjs-2 chart.js\\n\\n
According to its official documentation, Victory is “an opinionated, but fully overridable, ecosystem of composable React components for building interactive data visualizations.”
\\nLike many other React chart libraries on the list, Victory was built with React and D3. It comes with a wide variety of charts out of the box that are fully customizable.
\\nVictory has robust, detailed documentation, which makes the library beginner-friendly and easy to get started with. It features drawing support for SVG and high-quality animations that can be customized (at least to some extent). Victory also offers responsive charts that work well across screen sizes, and it supports chart component animations.
\\nThe library has 11.1K stars on GitHub at the time of writing and is maintained by the developers at Nearform, formerly Formidable Labs.
\\nA major advantage of using Victory is that it can also be used to build iOS and Android applications. This is because Victory has a version for React Native that uses an almost identical API to the web version.
\\nVictory can be installed using npm or Yarn:
\\nnpm install victory\\nOR\\nyarn add victory\\n\\n
Nivo, like many other React chart libraries, was built with React and D3 and provides a variety of chart types and designs to choose from. The library offers HTML, Canvas, and SVG charts, provides support for client and server-side rendering, and works well with animations.
\\nNivo comes with a wide range of beautiful charts that can be customized if needed, without much difficulty. Many of the charts Nivo provides are responsive by default, so they fit well across various screen sizes. Nivo also supports motion and transitions, which are powered by React Motion.
\\nAt the time of writing, Nivo has 13.5K GitHub stars. It boasts a thriving community and engaged maintainers and has a beautiful website with detailed documentation that makes it easy to get started. Nivo can be installed using npm or Yarn:
\\nnpm install @nivo/core @nivo/bar --legacy-peer-deps\\n\\n
To use Nivo, install the @nivo/core
package first, then select the appropriate scoped @nivo
packages based on the charts you want to use. The command above installs the package required for using the bar chart. You have to use the --legacy-peer-deps
flag to force the installation while ignoring peer dependency conflicts. At the moment, Nivo does not yet support the latest React version.
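With @nivo/bar installed, a bar chart might look like this (a sketch; note that Nivo’s responsive components fill a sized parent, and the data keys here are illustrative):

import { ResponsiveBar } from "@nivo/bar";

const data = [
  { country: "AD", kebab: 114 },
  { country: "AE", kebab: 97 },
];

export function KebabChart() {
  return (
    // The parent must have an explicit height for ResponsiveBar to fill
    <div style={{ height: 300 }}>
      <ResponsiveBar
        data={data}
        keys={["kebab"]}
        indexBy="country"
        margin={{ top: 20, right: 20, bottom: 40, left: 40 }}
      />
    </div>
  );
}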
React ApexCharts is a React wrapper for ApexCharts, a modern JavaScript charting library that helps developers create interactive visualizations for web pages. With over 1.3K stars on GitHub, it has become increasingly popular among React developers who need sophisticated charting capabilities.
\\nReact ApexCharts offers a wide range of chart types, including line, area, bar, pie, donut, scatter, bubble, heatmap, and radial bar charts. One of its standout features is the ability to create mixed charts that combine different chart types within a single visualization.
\\nThe library provides robust interactive features such as zooming, panning, and scrolling with excellent animation support. All charts are responsive by default, making them ideal for projects that need to work across various devices and screen sizes. React ApexCharts also offers support for real-time data updates, which is particularly useful for dashboards and monitoring applications.
\\nReact ApexCharts renders using SVG, which allows for better quality graphics that scale well. The library also provides extensive customization options through a comprehensive API, allowing developers to modify everything from colors and fonts to tooltips and legends.
\\nDocumentation for React ApexCharts is well-structured and includes numerous examples, making it easy for developers to get started. The library is actively maintained, with regular updates and improvements.
\\nYou can install React ApexCharts using npm:
\\nnpm install react-apexcharts apexcharts\\n\\n
Ant Design Charts is a charting library developed by the team behind the popular Ant Design UI framework. It integrates seamlessly with other Ant Design components, making it an excellent choice for developers already using the Ant Design ecosystem.
\\nThe library offers a comprehensive range of chart types, including conventional options like line, bar, and pie charts and more specialized visualizations such as funnel charts, radar charts, and gauge charts. Ant Design Charts also provides support for statistical charts like box plots and waterfall charts.
\\n\\nOne of the key advantages of Ant Design Charts is its focus on user experience and accessibility. The charts come with built-in features like tooltips, legends, and responsive layouts that adapt to different screen sizes. The library also provides robust theming capabilities, allowing developers to customize the look and feel of charts to match their application’s design.
\\nAnt Design Charts handles large datasets efficiently through Canvas rendering, which results in better performance compared to SVG-based alternatives when dealing with complex visualizations.
\\nThe documentation for Ant Design Charts is comprehensive, though some sections may only be available in Chinese. However, the code examples are clear enough that most developers can understand how to implement the charts even with limited documentation in English.
\\nAnt Design Charts can be installed using npm or Yarn:
\\nnpm install @ant-design/charts\\nOR\\nyarn add @ant-design/charts\\n\\n
Apache ECharts is a charting library maintained by the Apache Software Foundation. Built on top of ZRender, a lightweight canvas library, it provides both SVG and Canvas rendering support.
\\nBesides the usual chart types, ECharts also provides a few unique chart types like Sankey diagrams, graphs, and heatmaps. Along with multiple data visualization types, ECharts also provides a wide range of customization options and has support for themes and extensions. It also supports animation and is responsive by default.
\\nMany of the charts in ECharts are optimized for mobile interaction, like zooming and panning the coordinate system with your fingers on small screens.
\\nIts extensive customization options and support for themes and extensions make ECharts a great choice for developers who want to create beautiful, informative charts with detailed data visualizations.
\\n\\nECharts can be installed using npm or Yarn:
\\nnpm install echarts\\nOR\\nyarn add echarts\\n\\n
visx is a collection of reusable data visualization components built by Airbnb. It is built on top of D3, provides low-level primitives for a wide range of chart types, and renders to SVG.
\\nIts minimalistic design makes visx aesthetically pleasing. The API is also super customizable and allows you to build your own charting library on top of it.
\\nvisx also has a strong focus on performance and keeps bundle sizes small. It works well with CSS-in-JS libraries like styled-components and Emotion.
\\nvisx can be installed using npm:
\\nnpm i @visx/group @visx/shape @visx/scale --legacy-peer-deps\\n\\n
Since visx is a collection of components, you will need to select the appropriate @visx packages based on the charts you want to create. At the moment, some of the @visx packages do not yet support the latest React version. You have to use the --legacy-peer-deps
flag to force the installation while ignoring peer dependency conflicts.
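Because visx ships primitives rather than prebuilt charts, you assemble scales and shapes yourself. A compact sketch of a bar chart (the dimensions and data are illustrative):

import { Group } from "@visx/group";
import { scaleBand, scaleLinear } from "@visx/scale";
import { Bar } from "@visx/shape";

const data = [
  { label: "A", value: 30 },
  { label: "B", value: 80 },
];
const width = 300;
const height = 200;

// Map category labels to x positions and values to bar heights
const xScale = scaleBand({ domain: data.map((d) => d.label), range: [0, width], padding: 0.3 });
const yScale = scaleLinear({ domain: [0, 100], range: [height, 0] });

export function TinyBarChart() {
  return (
    <svg width={width} height={height}>
      <Group>
        {data.map((d) => (
          <Bar
            key={d.label}
            x={xScale(d.label)}
            y={yScale(d.value)}
            width={xScale.bandwidth()}
            height={height - yScale(d.value)}
            fill="#4a90e2"
          />
        ))}
      </Group>
    </svg>
  );
}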
MUI X Charts is a charting library built by the Material UI (MUI) team, designed to seamlessly integrate with the popular React UI framework. It leverages the power of the MUI ecosystem, providing a cohesive and aesthetically pleasing charting experience.
\\nMUI X Charts offers a variety of chart types, including line, bar, scatter, and pie charts, with a strong emphasis on customization and accessibility. It utilizes SVG rendering for efficient performance.
\\nOne of the key advantages of MUI X Charts is its tight integration with the MUI theme and styling system. This allows developers to easily customize the look and feel of their charts to match their application’s design. The library also provides comprehensive documentation and numerous examples, making it easy to get started.
\\nMUI X Charts is actively maintained by the MUI team, ensuring regular updates and improvements. It benefits from the strong community and support of the broader MUI ecosystem.
\\nMUI X Charts can be installed using npm or Yarn:
\\nnpm install @mui/x-charts\\nor\\nyarn add @mui/x-charts\\n\\n
The Charts package has a peer dependency on @mui/material
. If you are not already using it in your project, you can install it with:
npm install @mui/material @emotion/react @emotion/styled\\nor\\nyarn add @mui/material @emotion/react @emotion/styled\\n\\n
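A minimal sketch of an MUI X line chart (the axis and series values are illustrative):

import { LineChart } from "@mui/x-charts/LineChart";

export function TrendChart() {
  return (
    <LineChart
      xAxis={[{ data: [1, 2, 3, 5, 8] }]}
      series={[{ data: [2, 5.5, 2, 8.5, 1.5] }]}
      width={500}
      height={300}
    />
  );
}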
For large datasets, Canvas or WebGL rendering typically outperforms SVG. Apache ECharts handles tens of thousands of data points efficiently. React-chartjs-2 offers a good balance between usability and performance.
\\nRegardless of library choice, implementing data aggregation or windowing techniques can significantly improve chart responsiveness.
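\\nFor instance, a simple bucket-mean downsampler caps how many points reach the chart (a sketch; the bucketing strategy here is a naive illustration):

// Render at most `target` points by averaging each bucket of raw values
function downsample(points: number[], target: number): number[] {
  if (points.length <= target) return points;
  const bucketSize = Math.ceil(points.length / target);
  const out: number[] = [];
  for (let i = 0; i < points.length; i += bucketSize) {
    const bucket = points.slice(i, i + bucketSize);
    out.push(bucket.reduce((sum, v) => sum + v, 0) / bucket.length);
  }
  return out;
}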
\\nConsider your project’s specific needs:
\\nreact-chartjs-2 and React ApexCharts handle streaming data well, while Apache ECharts offers specialized components for dynamic data. For optimal performance with real-time visualization, implement throttling/debouncing, use windowing techniques, choose libraries with efficient update methods, and consider Canvas over SVG for higher update frequencies.
\\nVisualizing large datasets can be quite challenging, leading to slow performance and high memory usage that may even crash the browser. Unnecessary re-renders are also common when state management libraries like Redux or Redux Toolkit handle asynchronous updates. Ensuring the responsiveness of chart components on mobile is another common challenge.
\\nThis section will address these pain points:
\\nPerformance can be improved by throttling and debouncing the data streams to reduce the rate at which the chart is rerendered. Another way to improve performance is by integrating chart libraries that leverage WebGL or Canvas.
\\nYou can avoid unnecessary re-renders by using the useMemo
or useCallback
React Hooks to memoize chart data so that the chart component only updates when necessary. Complex data transformation logic can also lead to redundant re-renders when it’s handled in the store. Ensure that it’s handled in a selector or, better still, the component rendering the chart.
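As a sketch, memoizing the derived chart data keeps the transformation out of the store and off the re-render hot path (SomeChart is a hypothetical stand-in for whichever chart component you use):

import { useMemo } from "react";
import { SomeChart } from "./SomeChart"; // hypothetical chart component

export function ChartContainer({ rawPoints }: { rawPoints: number[] }) {
  // Recompute the chart-shaped data only when rawPoints actually changes
  const chartData = useMemo(
    () => rawPoints.map((y, i) => ({ x: i, y })),
    [rawPoints],
  );
  return <SomeChart data={chartData} />;
}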
Most of these chart libraries rely on D3.js for the fine-tuning of touch gestures on mobile screens. Some also provide responsive classes or props for mobile responsiveness. Chart libraries like Recharts and react-chartjs-2 allow you to specify dynamic dimensions that fit into the parent container in a fluid manner.
\\nThere are more charting libraries available for React than we can cover in a single article, but the few libraries described above are among the most widely adopted and beloved in the React community.
\\nWhen deciding on a chart library to use for your React project, remember that they were all created to help developers achieve a particular end result. Compare their functions and what they offer before deciding which is best for your project. Some chart libraries might be ideal for smaller projects, while others are better suited to more complex projects.
\\nUltimately, the choice of what React chart library to use depends on your project requirements and what types of features you prefer to work with.
Retrieval-augmented generation (RAG) techniques enhance large language models (LLMs) by integrating external knowledge sources, improving their performance in tasks requiring up-to-date or specialized information. In this article, we will explore six RAG types:
\\nRemember that the list is not exhaustive because this is a hot topic in both the scientific community and among practitioners. As you’re reading this, somewhere, a researcher is working on a new, clever way of integrating new documental resources with an LLM.
\\nFor frontend developers, this context is increasingly important. AI-powered features (chatbots, search assistants, etc.) are becoming more integrated into modern web apps. Understanding how RAG types function helps frontend teams collaborate more effectively with backend engineers, design better user interactions, and build smarter interfaces that take advantage of real-time, contextual data from LLMs.
\\nLet’s quickly define each of these six common RAG types:
\\nEach of these approaches offers unique advantages depending on the application’s specific requirements, such as the need for real-time information, structured knowledge integration, or efficiency considerations. To better understand how these approaches are related, I have prepared a diagram to categorize them:
\\nNow that we have a high-level overview, let’s review each of the six RAG types a little closer.
\\nRAG is an umbrella term for the process of supplementing an LLM’s generation with additional information that is retrieved and added to the prompt before generation begins. The first step in setting up a RAG system is defining the additional information source:
\\nAs the schema shows, the additional data is split into chunks, and those chunks are indexed in a database. Chunking breaks the additional data into smaller pieces so that they can be added to a prompt selectively, expanding the knowledge of the LLM right before generation.
\\nA common use case is a company that sells mechanical components and deploys a chatbot to assist its customers. The chunking phase extracts chunks from the technical datasheet for each component, and these chunks are added to the database.
\\nHow the database is structured is a complex topic. Typically, specific database engines that are especially aware of the kind of data they will handle are used:
\\nWhen the user chats with the LLM, the query is sent to the database to select the chunks that are relevant to the topic and are added to the prompt to personalize the answer.
\\nHere is an example of the process: we are developing a chatbot specialized in Paris to provide tourist information; the (fictional) chunks are:
\\nIf the user asks, “Who designed the Eiffel Tower, and when was it completed?”, only Chunk 2 is selected and added to the prompt sent to the LLM.
\\nIn the simplest version of RAG, the database is part of a very specific family. This vector database is optimized for storing and retrieving data in the form of vectors, which are numerical representations of information, such as text or images.
\\nThe vector database stores embeddings, which are high-dimensional vectors that, to some extent, capture the meaning of the text.
\\nWhen a query is made, the vector database searches for the most semantically similar vectors (representing relevant chunks of information) and retrieves them for the model to generate accurate, context-aware responses. This allows for efficient and scalable handling of large datasets.
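\\nAs a toy sketch of that retrieval step (assuming the chunk embeddings were computed ahead of time by an embedding model):

type Chunk = { text: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most semantically similar to the query vector
function topK(queryVector: number[], chunks: Chunk[], k = 3): Chunk[] {
  return [...chunks]
    .sort(
      (a, b) =>
        cosineSimilarity(queryVector, b.vector) -
        cosineSimilarity(queryVector, a.vector),
    )
    .slice(0, k);
}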
\\nTwo popular vector database options for RAG are Pinecone, which is delivered as a PaaS, and FAISS (from Meta), a library that can be used on-premises.
\\nGraph Retrieval-Augmented Generation (Graph-RAG) is an evolution of RAG. The difference is that it uses a graph database instead of a vector database for retrieval.
\\nGraph-RAG is suitable for organizing and retrieving information by leveraging relationships and connections between different chunks in a graph structure.
\\n\\nThis is particularly efficient for handling information that is naturally graph-based. For example, a collection of scientific articles on a given topic that reference each other in their bibliographies:
\\nThe key difference lies in the retrieval mechanism. Vector databases focus on semantic similarity by comparing numerical embeddings, while graph databases emphasize relations between entities.
\\nTwo solutions for graph databases are Neptune from Amazon and Neo4j. In a case where you need a solution that can accommodate both vector and graph, Weaviate fits the bill.
\\nKnowledge-Augmented Generation (KAG) is a specific application of Graph-RAG where both the nodes (information chunks) and the edges (relationships between them) carry semantic meaning.
\\nUnlike in standard Graph-RAG, where connections may simply denote associations, in KAG, these relationships have defined semantics, such as “causes,” “is a type of,” or “depends on.”
\\nThis adds a layer of meaning to the graph structure, enabling the system to retrieve not only relevant information but also understand how these pieces of information are logically or causally linked.
\\nSemantic edges are important because they enhance the model’s ability to generate contextually rich and accurate responses. KAG can provide deeper insights and more coherent answers by considering the specific nature of the relationships between chunks.
\\nFor example, in a medical context, understanding that “symptom A causes condition B” allows the model to generate more informed responses than simply retrieving related information without understanding the nature of the connection. This makes KAG particularly valuable in complex domains where relationships between pieces of knowledge are crucial:
\\nThe platforms for the graph database listed above let you select chunks matching a given query and the chunks linked by specific edges. This allows you to implement a KAG architecture.
\\nRecent advances in long-context LLMs have extended their ability to process and reason over substantial textual inputs. By accommodating more tokens, these models can assimilate extensive information in a single prompt. This makes them well-suited for tasks like document comprehension, multi-turn dialogue, and summarization of lengthy texts.
\\nCAG leverages this new possibility by using a key-value repository that, like a microprocessor’s cache, acts as a rapid-access mechanism for information to be preloaded into the prompt, instead of relying on a vector or graph database as above:
\\nThe figure shows how the key-value repository can be used efficiently throughout the interactions with the user to augment the prompt iteratively. Each answer generated by the LLM is returned to the user and added to the key-value repository within the prompt.
\\nRemember to keep the number of tokens in the prompt below the limit for the LLM you are using. To do so, periodically clean the key-value repository of older content. It is worth noting that the CAG approach can be plugged into any of the other solutions described here to accelerate retrieval as long as the interactive session with the user evolves.
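\\nA sketch of such a key-value repository with a rough token budget (the four-characters-per-token heuristic is an assumption standing in for a real tokenizer):

type CacheEntry = { key: string; value: string; tokens: number };

class PromptCache {
  private entries: CacheEntry[] = [];

  constructor(private maxTokens: number) {}

  add(key: string, value: string): void {
    // Rough heuristic: ~4 characters per token; use a real tokenizer in practice
    const tokens = Math.ceil(value.length / 4);
    this.entries.push({ key, value, tokens });
    // Evict the oldest entries until the content fits the model's context window
    while (this.totalTokens() > this.maxTokens) {
      this.entries.shift();
    }
  }

  toPrompt(): string {
    return this.entries.map((e) => `${e.key}: ${e.value}`).join("\n");
  }

  private totalTokens(): number {
    return this.entries.reduce((sum, e) => sum + e.tokens, 0);
  }
}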
\\nZero-Indexing Internet Search-Augmented Generation aims to enhance the generation’s performance by dynamically integrating the latest online information using standard search engine APIs like Google or more specialized ones.
\\nIt tries to circumvent the limitations of RAG systems that rely on a static, pre-indexed corpus. Zero-indexing refers to the absence of a pre-built index, allowing the system to access real-time data directly from the Internet:
\\nThis method involves two main components:
\\nA practical use case for this approach is generating responses that require up-to-date information, such as news updates, market trends, or recent scientific discoveries, ensuring the content is timely and relevant.
\\nCorrective Retrieval Augmented Generation (CRAG) is a method designed to enhance the robustness of RAG. CRAG introduces an evaluator component that assesses the quality of retrieved chunks and assigns a confidence score to them before feeding them to the prompt. Based on this score, they are labeled as correct, incorrect, or ambiguous:
\\nFor correct retrievals, the refiner component refines the documents to extract key information. For incorrect retrievals, web searches are used to find better sources, similar to the zero-indexing approach above. Ambiguous cases combine both approaches. This method helps ensure that the generated content is accurate and relevant, even when initial retrievals are suboptimal.
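\\nThe routing the evaluator drives can be sketched like this (refine and webSearch are hypothetical placeholders for the refiner component and a search API):

type Verdict = "correct" | "incorrect" | "ambiguous";

// Hypothetical placeholders for the refiner component and a web search API
const refine = (chunks: string[]): string[] => chunks.map((c) => c.trim());
declare function webSearch(query: string): string[];

function selectContext(verdict: Verdict, chunks: string[], query: string): string[] {
  switch (verdict) {
    case "correct":
      return refine(chunks); // extract key information from good retrievals
    case "incorrect":
      return webSearch(query); // discard retrievals and search the web instead
    case "ambiguous":
      return [...refine(chunks), ...webSearch(query)]; // combine both
  }
}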
\\nCRAG can be seamlessly integrated into existing RAG frameworks to improve their performance across various tasks.
\\nIn this article, we discussed the basic RAG approach for augmenting an LLM’s generation, plus five different approaches that expand on RAG’s functionality.
\\nThe first two, Graph Retrieval-Augmented Generation and Knowledge-Augmented Generation, are successive refinements that better capture the underlying structure of the information and documents used to augment the generation. To accelerate retrieval, Cache-Augmented Generation can be used in combination with the other approaches.
\\nThe Zero-Indexing Internet Search-Augmented Generation and Corrective Retrieval-Augmented Generation approaches aim to keep the documents used for retrieval up to date by extracting information from online sources and APIs, and to increase the reliability of the generation.
\\nThe exact combination of these approaches is a matter of experience and knowledge: experience because you must know your audience and how it will interact with the LLM, and knowledge because you must grasp the inner structure of the knowledge you intend to use to augment the generation.
TLDR: Although still in beta, Lynx.js is a promising new JavaScript framework for building responsive, cross-platform apps with native performance, React compatibility, and multithreaded rendering. This tutorial walks through building a simple meal planner app using Lynx.js from setup to deployment.
\\nUsers are always looking for applications that can deliver seamless user experiences across all platforms. Keeping within UI and UX guidelines when working across different platforms can be challenging; each one has different design guidelines, screen resolutions, and user interactions that need to be taken into account. Because of this, some cross-platform frameworks are forced to compromise on performance or native aesthetics.
\\nThis is where the Lynx JavaScript library excels. Released in March 2025, Lynx.js is a family of technologies with native rendering capabilities, ensuring that components have a native look and feel on both iOS and Android. It also has a multithreaded engine that separates UI rendering from business logic, which helps applications run more smoothly, with more responsive interfaces overall.
\\nLynx.js is also JavaScript and React-based, with similarities to React Native, so developers won’t need to learn a new language to use it.
\\nIn this article, we will explore how to use Lynx.js to build cross-platform mobile applications that are robust and responsive, and then we’ll put all of our knowledge to use by building a simple meal-planning mobile application.
\\nResponsive design means designing and developing applications that deliver an optimal user experience across varying devices and screen sizes. In the case of Lynx.js, this means creating apps that look consistent on Android, iOS, and the web and perform correctly across different screen sizes and orientations. Lynx.js has complete support for current web technologies like Flexbox and CSS Grid, so it’s simple to build layouts that adjust dynamically to changing environments.
\\nLynx.js supports Flexbox, which is a popular flexible layout system that is useful for arranging items in rows or columns and managing space dynamically. This makes it very good at creating responsive designs with minimal effort, as you can see in this code snippet:
\\n/* Example: A simple Flexbox layout */\\n.container {\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n}\\n\\n
For more details on Flexbox, check out the Lynx Flexbox guide.
\\nLynx.js also supports CSS Grid, which is a two-dimensional layout tool that is ideal for building structured, grid-based designs that can adjust to various screen sizes. A basic code example is shown below:
\\n/* Example: A basic CSS Grid layout */\\n.grid-container {\\n display: grid;\\n grid-template-columns: 1fr 100px;\\n gap: 10px;\\n}\\n\\n
Similarly, for more information on CSS Grid, check out the Lynx CSS Grid guide documentation.
\\nLynx.js takes cross-platform, responsive mobile app design to the next level by pairing well-optimized web-based technology with native rendering features. Native rendering ensures that user interface components look and behave as if they were built specifically for Android, iOS, or the web. With the multithreaded engine, UI rendering is separated from business logic, which results in smoother rendering performance.
\\nAnother bonus worth mentioning: Lynx.js provides APIs that can detect the current platform, enabling developers to adapt styles, like Android’s Material Design or iOS’s guidelines, within one codebase. It also simplifies touch events and gestures for natural, intuitive user interactions across devices.
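\\nAs a sketch of what that platform-conditional styling might look like, assuming Lynx’s global SystemInfo object exposes the current platform (check the Lynx docs for the exact API surface):

// Assumption: Lynx exposes a global SystemInfo with a platform field
declare const SystemInfo: { platform: string };

const isIOS = SystemInfo.platform === "iOS";

export function Header() {
  return (
    // Illustrative values: pick per-platform spacing from one codebase
    <view style={{ paddingTop: isIOS ? "44px" : "24px" }}>
      <text>{isIOS ? "iOS-styled header" : "Android-styled header"}</text>
    </view>
  );
}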
\\nSome other Lynx.js enhancements include:
\\nCreating mobile apps that work flawlessly on Android, iOS, and the web is a basic requirement for all modern applications. Android and iOS differ in navigation and visual styling: Android uses a physical back button and follows Material Design, while iOS uses swipe gestures and follows the Human Interface Guidelines.
\\nWeb platforms, on the other hand, rely on browser functionality, which shapes their performance and responsiveness. These differences cause inconsistencies, which is why tools like Lynx.js aim to bridge the gap by combining platform detection APIs with optimized rendering pipelines to enable a consistent look and feel based on each platform’s capabilities.
\\nTo tackle these kinds of common challenges while also maintaining consistency, developers have to adopt best practices that are suited for cross-platform development with Lynx.js. These include:
\\nNow, let’s learn how to set up a development environment using Lynx.js. It is worth mentioning that we need to have a Mac because we will be using an iOS simulator, which is only available on Macs.
\\nCheck out the Quick Start documentation for an in-depth tutorial on how to set up Lynx.js. I will give a quick run-through of the process and mention some important things about using the iOS simulator.
\\nCreating a Lynx.js project is very simple because it uses Rspeedy, which is an Rspack-based Lynx build tool. The first step is to run one of the following commands to create a project depending on which package manager you use:
\\n# npm\\nnpm create rspeedy@latest\\n\\n# yarn\\nyarn create rspeedy\\n\\n# pnpm\\npnpm create rspeedy@latest\\n\\n# bun\\nbun create rspeedy@latest\\n\\n
That’s all it takes to set up a Lynx.js project!
\\nNext, you have to download LynxExplorer. There are two versions, so choose the one that relates to your operating system:
\\nAfter you have downloaded a version, extract the zip file, and you should have a file like this: LynxExplorer-arm64
. Now, open the Simulator app on your Mac. You can do this in a couple of ways:
With the Simulator up and running, just drag the extracted LynxExplorer app (the file name will match the version you downloaded, e.g., LynxExplorer-x86_64.app) into it, and the app will install on the Simulator. You might also want to install the Lynx DevTool for debugging the application and codebase.
Now you should see the app installed like this example:
\\nClicking on the icon will open the Lynx.js app home screen:
\\nWe can run our app by going to the project folder, for example, rspeedy-project
, and then running the usual command to start our app. If you use a package manager other than npm
, use the appropriate command:
# npm\\nnpm run dev\\n\\n# yarn\\nyarn run dev\\n\\n# pnpm\\npnpm run dev\\n\\n# bun\\nbun run dev\\n\\n
Now, you should see a barcode and a web URL that looks similar to this:
\\nhttp://111.111.11.11:3004/main.lynx.bundle?fullscreen=true\\n\\n
Copy and paste the URL that you have generated into the Lynx Explorer app on your iOS simulator and use it as the Card URL. Then click the Go button. Now, you will see the home screen for your project:
\\nOpen the codebase in your code editor and make updates and changes like any other project, and you will see the changes in real time.
\\n\\nIf, for any reason, you have problems with your application making updates, there are a few things you can do to resolve them. There is a piece of code that does hot reloading, i.e., real-time changes. You can find it here:
\\nif (import.meta.webpackHot) {\\n import.meta.webpackHot.accept();\\n}\\n\\n
This is how to enable Hot Module Replacement (HMR) in a JavaScript module when using webpack. You can also restart the emulator by going to the Simulator and then Device > Restart.
\\nAlternatively, you can close and re-run the app: open the Simulator, access the App Switcher, and swipe the app off the screen like you would on a physical iOS device, then reopen it with the same Card URL.
\\nIf all else fails, then delete the app from the Simulator by long pressing on it and then removing the app. Then, you can repeat the process and drag and drop that LynxExplorer-x86_64.app
back onto the Simulator to install it again.
Currently, Lynx.js lacks support for CSS media queries, which are heavily used in responsive layouts that dynamically change based on screen dimensions and orientation. At the moment, developers have to use Flexbox and CSS Grid to make their applications responsive.
\\nThis can be achieved by programmatically resizing styles and layouts with JavaScript based on viewport size and orientation data, allowing the user interface to respond dynamically. By detecting and responding to screen size and orientation changes in code, developers can build adaptive, user-friendly apps that deliver an optimal experience across a wide range of devices.
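\\nA sketch of that pattern, assuming a SystemInfo global that reports physical screen dimensions (the property names here are illustrative; check the Lynx docs):

// Assumption: SystemInfo reports physical pixel dimensions and density
declare const SystemInfo: { pixelWidth: number; pixelRatio: number };

// Convert physical pixels to logical pixels, then branch the layout in code
const logicalWidth = SystemInfo.pixelWidth / SystemInfo.pixelRatio;
const isWide = logicalWidth >= 768;

export function AdaptiveRow() {
  return (
    // Side-by-side on wide screens, stacked on narrow ones
    <view style={{ display: "flex", flexDirection: isWide ? "row" : "column" }}>
      <view style={{ flex: 1 }} />
      <view style={{ flex: 1 }} />
    </view>
  );
}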
\\nCheck out the Lynx Blog to see their roadmap for development and future feature requests in this area.
\\nWhen building cross-platform mobile applications, it’s important to deal with platform differences to ensure a seamless user experience on Android, iOS, and the web. Designs need to adapt to mobile interactions like taps and swipes (with a touch target size of at least 44×44 pixels) as compared to mouse input on the web.
\\nLayout adjustments are necessary to account for differences in status bar heights — for example, iOS devices often have taller bars due to notches, while web interfaces vary based on the browser. Safe areas must be respected across all ecosystems. Navigation behaviors also differ: Android uses a back button, iOS uses swipe gestures, and web apps rely on browser controls. These patterns often require custom implementations, so thorough device testing is essential for enabling developers to deliver a consistent, intuitive experience across platforms.
\\nPerformance optimization is important for delivering smooth, fluid experiences across different mobile devices and browsers. Animating optimized CSS properties like transform and opacity avoids expensive layout recalculations, helping developers minimize reflows and repaints while keeping UI interactions smooth.
Additionally, efficient resource utilization is critical, especially on devices with low bandwidth or limited processing power. Lazy loading images postpones non-critical resource loading until it’s needed, reducing initial load times.
\\nAs we mentioned earlier, Lynx.js has a multi-threaded engine, and native rendering enables these techniques to be even more effective, maintaining interfaces that are responsive and fast on all platforms.
\\nTo guarantee that your Lynx.js app delivers a unified and responsive experience on Android, iOS, and the web, you must test and debug it thoroughly. Since physical devices aren’t always possible, emulators and simulators can be good alternatives.
\\nThe Android Emulator included in Android Studio allows you to emulate varying Android devices and screen sizes. Xcode’s Simulator offers a range of iPhone and iPad models that can be used to test an app’s versatility.
\\nFor testing web applications, browser debugging tools like Chrome DevTools or Firefox Developer Tools are also useful. They let developers test screen sizes, orientations, and even network conditions, toggle between mobile and desktop modes, and inspect elements for layout issues.
\\nLynx DevTool is another debugging tool, this one specifically meant for debugging Lynx.js applications. It offers features like element tree inspection, console log viewing, and JavaScript debugging. It is designed mainly to work in collaboration with Lynx.js’ built-in rendering engine, which enables you to diagnose framework-specific issues that other tools are not able to identify. For more details, refer to the Lynx DevTool documentation.
\\nIf you want to make sure that apps are responsive, here are some common debugging techniques worth taking into account:
\\nIn these types of situations, a systematic approach is usually recommended. Start with the smallest screen and gradually test bigger screens, checking for layout, usability, and performance issues along the way. Using emulators, simulators, browser developer tools, and the Lynx DevTool can make it easier to ensure that a Lynx application is responsive and high-performance on all the devices you are running it on.
\\nBefore we begin, remember that this tutorial works best if you have macOS because we need to use the iOS simulator application for testing, which is only available on Mac. You should have gone through the Setting Up Your Development Environment section prior to this section, so hopefully, you are already set up and good to go.
\\nLet’s build a simple meal planner application using Lynx.js to implement what we’ve learned so far. Find a location on your computer where you want to create this project, and then create a folder named meal-planner-app
. Then, cd
into the folder and run the following command to set up our Lynx.js project:
npm create rspeedy@latest\\n\\n
This command will set up our Lynx.js project. These are the settings I chose:
\\nAfter this part of the setup is complete, run this command to create the files and folders needed for our project codebase:
\\ncd rspeedy-project\\nnpm install\\nnpm install react-router@6\\ncd src\\nmkdir -p data screens styles types\\ntouch data/mealDatabase.ts && touch screens/HomeScreen.tsx && touch screens/SearchScreen.tsx\\ntouch styles/HomeScreen.css && touch styles/SearchScreen.css\\ntouch types/lynx.d.ts\\n\\n
Our project should now have all of the main folders and files required for our application, with the exception of the images, which you can download from the assets
folder of this GitHub repository.
Download and copy all the images from this repo into your local assets
folder for this project.
\\nNow, we can start on the codebase, which shouldn’t take too long because it’s a simple application with only six files to work on.
But first, let’s take a look at what our application looks like:
\\nThis is what our home screen will look like. As you can see, it displays the meals we have chosen with an image and their respective calories. There is also a delete button to remove a meal. At the top of the screen is an add button, which takes us to the search screen, where we can search and add meals:
\\nThis is the search screen that allows us to search for different meals.
\\nAll of the meals are hardcoded, and in a production application, the meals would likely come from a database or server. On this screen, you can scroll and search for meals by typing in some letters based on their names. When you click on the add (+) button, you are automatically redirected back to the home screen, where you can view the meal you added. You can also use the back button to navigate back to the home screen. The navigation uses the React Router library.
\\nAnd that’s the basic functionality of this simple meal planner app. Now, let’s build it!
\\nThe first file to work on will be the mealDatabase.ts
file, which requires this code:
import avocadoToast from \'../assets/avocado-toast.jpg\';\\nimport capreseSalad from \'../assets/caprese-salad.jpg\';\\nimport chickenBurritoBowl from \'../assets/chicken-burrito-bowl.jpg\';\\nimport eggBreakfastSandwich from \'../assets/egg-breakfast-sandwich.jpg\';\\nimport greekYogurt from \'../assets/greek-yogurt.jpg\';\\nimport grilledChickenSalad from \'../assets/grilled-chicken-salad.jpg\';\\nimport lentilSoup from \'../assets/lentil-soup.jpg\';\\nimport mediterraneanHummusPlate from \'../assets/mediterranean-hummus-plate.jpg\';\\nimport mushroomRisotto from \'../assets/mushroom-risotto.jpg\';\\nimport oatmealWithFruit from \'../assets/oatmeal-with-fruit.jpg\';\\nimport peanutButterBananaToast from \'../assets/peanut-butter-banana-toast.jpg\';\\nimport proteinSmoothie from \'../assets/protein-smoothie.jpg\';\\nimport quinoaBowl from \'../assets/quinoa-bowl.jpg\';\\nimport salmonWithRice from \'../assets/salmon-with-rice.jpg\';\\nimport steakWithVegetables from \'../assets/steak-with-vegetables.jpg\';\\nimport tofuVegetableCurry from \'../assets/tofu-vegetable-curry.jpg\';\\nimport tunaSandwich from \'../assets/tuna-sandwich.jpg\';\\nimport turkeyWrap from \'../assets/turkey-wrap.jpg\';\\nimport vegetableStirFry from \'../assets/vegetable-stir-fry.jpg\';\\nimport veggiePasta from \'../assets/veggie-pasta.jpg\';\\n\\nexport interface Meal {\\n id: string;\\n name: string;\\n calories: number;\\n image: string;\\n}\\n\\n// Create a mock meal database with 20 food items\\nexport const mealDatabase: Meal[] = [\\n {\\n id: \'1\',\\n name: \'Grilled Chicken Salad\',\\n calories: 350,\\n image: grilledChickenSalad,\\n },\\n {\\n id: \'2\',\\n name: \'Veggie Pasta\',\\n calories: 450,\\n image: veggiePasta,\\n },\\n {\\n id: \'3\',\\n name: \'Salmon with Rice\',\\n calories: 520,\\n image: salmonWithRice,\\n },\\n {\\n id: \'4\',\\n name: \'Avocado Toast\',\\n calories: 280,\\n image: avocadoToast,\\n },\\n {\\n id: \'5\',\\n name: \'Protein Smoothie\',\\n calories: 320,\\n image: proteinSmoothie,\\n },\\n {\\n id: \'6\',\\n name: \'Quinoa Bowl\',\\n calories: 420,\\n image: quinoaBowl,\\n },\\n {\\n id: \'7\',\\n name: \'Greek Yogurt with Berries\',\\n calories: 180,\\n image: greekYogurt,\\n },\\n {\\n id: \'8\',\\n name: \'Steak with Vegetables\',\\n calories: 550,\\n image: steakWithVegetables,\\n },\\n {\\n id: \'9\',\\n name: \'Vegetable Stir Fry\',\\n calories: 380,\\n image: vegetableStirFry,\\n },\\n {\\n id: \'10\',\\n name: \'Tuna Sandwich\',\\n calories: 420,\\n image: tunaSandwich,\\n },\\n {\\n id: \'11\',\\n name: \'Chicken Burrito Bowl\',\\n calories: 650,\\n image: chickenBurritoBowl,\\n },\\n {\\n id: \'12\',\\n name: \'Mushroom Risotto\',\\n calories: 480,\\n image: mushroomRisotto,\\n },\\n {\\n id: \'13\',\\n name: \'Egg Breakfast Sandwich\',\\n calories: 390,\\n image: eggBreakfastSandwich,\\n },\\n {\\n id: \'14\',\\n name: \'Lentil Soup\',\\n calories: 250,\\n image: lentilSoup,\\n },\\n {\\n id: \'15\',\\n name: \'Caprese Salad\',\\n calories: 310,\\n image: capreseSalad,\\n },\\n {\\n id: \'16\',\\n name: \'Turkey Wrap\',\\n calories: 430,\\n image: turkeyWrap,\\n },\\n {\\n id: \'17\',\\n name: \'Oatmeal with Fruit\',\\n calories: 290,\\n image: oatmealWithFruit,\\n },\\n {\\n id: \'18\',\\n name: \'Tofu Vegetable Curry\',\\n calories: 410,\\n image: tofuVegetableCurry,\\n },\\n {\\n id: \'19\',\\n name: \'Mediterranean Hummus Plate\',\\n calories: 360,\\n image: mediterraneanHummusPlate,\\n },\\n {\\n id: \'20\',\\n name: \'Peanut Butter Banana Toast\',\\n 
calories: 340,\\n image: peanutButterBananaToast,\\n },\\n];\\n\\n
This file is quite easy to understand. It is essentially our mock meal database, which contains 20 food items. As I mentioned earlier, if this were a production application, all of this data would be in a database or on a server, but for our quick example, it’s easier to just hardcode it. All of the images should be in the assets
folder.
Next, we are going to work on the HomeScreen.tsx
file, so copy and paste the code into this file:
import React from \'react\';\\nimport \'../styles/HomeScreen.css\';\\nimport type { Meal } from \'../data/mealDatabase.js\';\\n\\ninterface HomeScreenProps {\\n navigateTo: (screen: string) => void;\\n meals: Meal[];\\n removeMeal: (id: string) => void;\\n}\\n\\nexport function HomeScreen({ navigateTo, meals, removeMeal }: HomeScreenProps) {\\n const navigateToSearch = () => {\\n navigateTo(\'search\');\\n };\\n\\n return (\\n <view\\n className=\\"home-screen\\"\\n style={{\\n width: \'100%\',\\n height: \'100%\',\\n paddingBottom: \'80px\',\\n boxSizing: \'border-box\',\\n }}\\n >\\n <view\\n className=\\"header\\"\\n style={{ position: \'sticky\', top: 0, zIndex: 10 }}\\n >\\n <text className=\\"title\\">My Meals</text>\\n <view className=\\"add-button\\" bindtap={navigateToSearch}>\\n <text>+</text>\\n </view>\\n </view>\\n\\n {meals.length === 0 ? (\\n <view className=\\"empty-state\\">\\n <text>No meals added yet. Tap + to add a meal.</text>\\n </view>\\n ) : (\\n <scroll-view\\n className=\\"meals-list\\"\\n style={{\\n width: \'100%\',\\n height: \'calc(100% - 60px)\',\\n paddingLeft: \'10px\',\\n paddingRight: \'10px\',\\n }}\\n scroll-orientation=\\"vertical\\"\\n >\\n {meals.map((meal) => (\\n <view key={meal.id} className=\\"meal-item\\">\\n <image src={meal.image} className=\\"meal-image\\" />\\n <view className=\\"meal-details\\">\\n <text className=\\"meal-name\\">{meal.name}</text>\\n <text className=\\"meal-calories\\">{meal.calories} calories</text>\\n </view>\\n <view\\n className=\\"remove-button\\"\\n bindtap={() => removeMeal(meal.id)}\\n >\\n <text>×</text>\\n </view>\\n </view>\\n ))}\\n </scroll-view>\\n )}\\n </view>\\n );\\n}\\n\\n
Our homepage is created from this file, and this is where all of our meals will be displayed once we have added them. There is some basic CRUD (create, read, update, delete) functionality, as we can delete meals after they have been added. In a more advanced version of this app, it could be possible to create and update meals, but that is beyond the scope of this tutorial.
\\nNow for our SearchScreen.tsx
file, which gets this upcoming code:
import { useCallback, useState } from \'@lynx-js/react\';\\nimport React from \'react\';\\nimport \'../styles/SearchScreen.css\';\\nimport { mealDatabase } from \'../data/mealDatabase.js\';\\nimport type { Meal } from \'../data/mealDatabase.js\';\\n\\ninterface SearchScreenProps {\\n navigateTo: (screen: string) => void;\\n addMeal: (meal: Meal) => void;\\n}\\n\\nexport function SearchScreen({ navigateTo, addMeal }: SearchScreenProps) {\\n const [searchTerm, setSearchTerm] = useState(\'\');\\n const [searchResults, setSearchResults] = useState<Meal[]>([]);\\n\\n const handleSearch = useCallback((value: string) => {\\n setSearchTerm(value);\\n if (value.trim() === \'\') {\\n setSearchResults([]);\\n return;\\n }\\n\\n const results = mealDatabase.filter((meal: Meal) =>\\n meal.name.toLowerCase().includes(value.toLowerCase())\\n );\\n setSearchResults(results);\\n }, []);\\n\\n const handleBack = () => {\\n navigateTo(\'home\');\\n };\\n\\n return (\\n <view\\n className=\\"search-screen\\"\\n style={{\\n width: \'100%\',\\n height: \'100%\',\\n paddingBottom: \'80px\',\\n boxSizing: \'border-box\',\\n }}\\n >\\n <view\\n className=\\"header\\"\\n style={{ position: \'sticky\', top: 0, zIndex: 10 }}\\n >\\n <view className=\\"back-button\\" bindtap={handleBack}>\\n <text>←</text>\\n </view>\\n <text className=\\"title\\">Find Meals</text>\\n </view>\\n\\n <view\\n className=\\"search-container\\"\\n style={{ position: \'sticky\', top: \'60px\', zIndex: 9 }}\\n >\\n <input\\n className=\\"search-input\\"\\n type=\\"text\\"\\n placeholder=\\"Search for meals...\\"\\n value={searchTerm}\\n bindinput={(e: any) => handleSearch(e.detail.value)}\\n />\\n </view>\\n\\n <scroll-view\\n className=\\"search-results\\"\\n style={{\\n width: \'100%\',\\n height: \'calc(100% - 120px)\',\\n paddingLeft: \'10px\',\\n paddingRight: \'10px\',\\n }}\\n scroll-orientation=\\"vertical\\"\\n >\\n {searchResults.length === 0 && searchTerm !== \'\' ? (\\n <view className=\\"no-results\\">\\n <text>No meals found. Try a different search term.</text>\\n </view>\\n ) : (\\n searchResults.map((meal) => (\\n <view\\n key={meal.id}\\n className=\\"meal-item\\"\\n bindtap={() => addMeal(meal)}\\n >\\n <image src={meal.image} className=\\"meal-image\\" />\\n <view className=\\"meal-details\\">\\n <text className=\\"meal-name\\">{meal.name}</text>\\n <text className=\\"meal-calories\\">{meal.calories} calories</text>\\n </view>\\n <view className=\\"add-icon\\">\\n <text>+</text>\\n </view>\\n </view>\\n ))\\n )}\\n </scroll-view>\\n </view>\\n );\\n}\\n\\n
This screen gives us the ability to search our hardcoded mealDatabase.js file and then add meals to the home screen. The search matches partial input, so it's easy to find a meal by typing just the first few letters of its name.
Now, we have to work on the styling for our two screens, so let’s start with HomeScreen.css
and add this code to the file:
.home-screen {\\n display: flex;\\n flex-direction: column;\\n height: 100%;\\n background-color: #f5f5f5;\\n position: relative;\\n padding-top: env(safe-area-inset-top);\\n padding-bottom: env(safe-area-inset-bottom);\\n}\\n\\n.header {\\n display: flex;\\n justify-content: space-between;\\n align-items: center;\\n padding: 16px;\\n background-color: #4a90e2;\\n height: 60px;\\n box-sizing: border-box;\\n}\\n\\n.title {\\n color: white;\\n font-size: 20px;\\n font-weight: bold;\\n}\\n\\n.add-button {\\n width: 36px;\\n height: 36px;\\n border-radius: 18px;\\n background-color: white;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n}\\n\\n.add-button text {\\n color: #4a90e2;\\n font-size: 24px;\\n font-weight: bold;\\n}\\n\\n.empty-state {\\n flex: 1;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n padding: 20px;\\n}\\n\\n.empty-state text {\\n color: #888;\\n text-align: center;\\n}\\n\\n.meals-list {\\n flex: 1;\\n padding: 10px;\\n}\\n\\n.meal-item {\\n display: flex;\\n align-items: center;\\n margin-bottom: 12px;\\n padding: 12px;\\n background-color: white;\\n border-radius: 8px;\\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\\n}\\n\\n.meal-image {\\n width: 60px;\\n height: 60px;\\n border-radius: 6px;\\n margin-right: 12px;\\n}\\n\\n.meal-details {\\n flex: 1;\\n}\\n\\n.meal-name {\\n font-size: 16px;\\n font-weight: bold;\\n color: #333;\\n margin-bottom: 4px;\\n}\\n\\n.meal-calories {\\n font-size: 14px;\\n color: #666;\\n}\\n\\n.remove-button {\\n width: 30px;\\n height: 30px;\\n border-radius: 15px;\\n background-color: #ff5252;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n min-width: 30px;\\n}\\n\\n.remove-button text {\\n color: white;\\n font-size: 18px;\\n font-weight: bold;\\n}\\n\\n
These styles are pretty straightforward; they cover all of the components on this screen.
\\nLet’s finish the styling by adding the CSS for the SearchScreen.css
file next:
.search-screen {\\n display: flex;\\n flex-direction: column;\\n height: 100%;\\n background-color: #f5f5f5;\\n position: relative;\\n padding-top: env(safe-area-inset-top);\\n padding-bottom: env(safe-area-inset-bottom);\\n}\\n\\n.header {\\n display: flex;\\n align-items: center;\\n padding: 16px;\\n background-color: #4a90e2;\\n height: 60px;\\n box-sizing: border-box;\\n}\\n\\n.back-button {\\n width: 36px;\\n height: 36px;\\n border-radius: 18px;\\n background-color: white;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n margin-right: 12px;\\n min-width: 36px;\\n}\\n\\n.back-button text {\\n color: #4a90e2;\\n font-size: 20px;\\n font-weight: bold;\\n}\\n\\n.title {\\n color: white;\\n font-size: 20px;\\n font-weight: bold;\\n}\\n\\n.search-container {\\n padding: 12px;\\n background-color: white;\\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\\n height: 60px;\\n box-sizing: border-box;\\n}\\n\\n.search-input {\\n width: 100%;\\n height: 36px;\\n border-radius: 18px;\\n border: 1px solid #ddd;\\n padding: 0 16px;\\n font-size: 16px;\\n background-color: #f9f9f9;\\n}\\n\\n.search-results {\\n flex: 1;\\n padding: 10px;\\n}\\n\\n.no-results {\\n padding: 20px;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n}\\n\\n.no-results text {\\n color: #888;\\n text-align: center;\\n}\\n\\n.meal-item {\\n display: flex;\\n align-items: center;\\n margin-bottom: 12px;\\n padding: 12px;\\n background-color: white;\\n border-radius: 8px;\\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\\n}\\n\\n.meal-image {\\n width: 60px;\\n height: 60px;\\n border-radius: 6px;\\n margin-right: 12px;\\n}\\n\\n.meal-details {\\n flex: 1;\\n}\\n\\n.meal-name {\\n font-size: 16px;\\n font-weight: bold;\\n color: #333;\\n margin-bottom: 4px;\\n}\\n\\n.meal-calories {\\n font-size: 14px;\\n color: #666;\\n}\\n\\n.add-icon {\\n width: 30px;\\n height: 30px;\\n border-radius: 15px;\\n background-color: #4caf50;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n min-width: 30px;\\n}\\n\\n.add-icon text {\\n color: white;\\n font-size: 18px;\\n font-weight: bold;\\n}\\n\\n
And that takes care of the user interface design for both of our screens! Only two files remain, and then we can run our application.
\\nOur first file will be lynx.d.ts
, so add this code to the file:
declare namespace JSX {\\n interface IntrinsicElements {\\n view: any;\\n \'scroll-view\': any;\\n input: any;\\n image: any;\\n text: any;\\n }\\n}\\n\\n
This file holds the TypeScript type definitions for Lynx.js's custom elements, so the compiler accepts tags like view, text, and scroll-view in our JSX.
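If you want stricter checking than any, you could flesh these declarations out with prop types for just the attributes this app uses. The shapes below are a sketch inferred from how the components are used in this tutorial, not official Lynx.js typings:

// Hypothetical stricter typings; official Lynx.js definitions may differ
interface LynxBaseProps {
  className?: string;
  style?: Record<string, string | number>;
  bindtap?: () => void;
  children?: unknown;
}

declare namespace JSX {
  interface IntrinsicElements {
    view: LynxBaseProps;
    text: LynxBaseProps;
    image: LynxBaseProps & { src?: string };
    input: LynxBaseProps & {
      type?: string;
      value?: string;
      placeholder?: string;
      bindinput?: (e: { detail: { value: string } }) => void;
    };
    'scroll-view': LynxBaseProps & {
      'scroll-orientation'?: 'vertical' | 'horizontal';
    };
  }
}

This buys you autocomplete and compile-time checks on props like bindtap and scroll-orientation instead of falling back to any everywhere.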
\\nOK, good. Just one final file to complete, and our project will be ready. Replace all of the code in the App.tsx
file with this code:
import { useState } from \'@lynx-js/react\';\\nimport type { Meal } from \'./data/mealDatabase.js\';\\nimport { mealDatabase } from \'./data/mealDatabase.js\';\\nimport { HomeScreen } from \'./screens/HomeScreen.js\';\\nimport { SearchScreen } from \'./screens/SearchScreen.js\';\\n\\n// Use the first two meals from the database as default meals\\nconst defaultMeals: Meal[] = [mealDatabase[0], mealDatabase[1]];\\n\\nexport function App() {\\n const [currentScreen, setCurrentScreen] = useState(\'home\');\\n const [savedMeals, setSavedMeals] = useState<Meal[]>(defaultMeals);\\n\\n const navigateTo = (screen: string) => {\\n setCurrentScreen(screen);\\n };\\n\\n const addMeal = (meal: Meal) => {\\n if (!savedMeals.some((savedMeal) => savedMeal.id === meal.id)) {\\n setSavedMeals([...savedMeals, meal]);\\n }\\n setCurrentScreen(\'home\');\\n };\\n\\n const removeMeal = (id: string) => {\\n setSavedMeals(savedMeals.filter((meal) => meal.id !== id));\\n };\\n\\n return (\\n <view style={{ width: \'100%\', height: \'100%\', overflow: \'hidden\' }}>\\n {currentScreen === \'home\' ? (\\n <HomeScreen\\n navigateTo={navigateTo}\\n meals={savedMeals}\\n removeMeal={removeMeal}\\n />\\n ) : (\\n <SearchScreen navigateTo={navigateTo} addMeal={addMeal} />\\n )}\\n </view>\\n );\\n}\\n\\n
Our App.tsx
file serves as the main component for our Lynx.js application and handles the navigation between our home and search screens. It also holds our application state and passes down the functions that the child components use to update that state.
Great — our project codebase is complete! Just run the command below to start the mobile application and copy and paste the web URL into the Lynx iOS simulator app:
\\nnpm run dev\\n\\n
Building cross-platform mobile applications with a responsive and consistent user interface doesn’t have to be a time-consuming experience. With Lynx.js, developers have a tool that can provide native rendering, a multithreaded engine, and a flexible UI design philosophy.
While still in beta at the time of this article's writing, Lynx.js has all the hallmarks of a true cross-platform contender, competing alongside the likes of Flutter for mobile app development.
\\nHopefully, this article encourages you to learn more about Lynx.js and start building your own fun and innovative mobile applications!
When working with customers, businesses use reports to deliver information. Several tools and software packages can produce reports, but combining them into a streamlined delivery workflow can be difficult.
\\nIn this post, I’ll discuss a method that I used in my professional role as tech lead of a software team. I’ll walk through a high-level process for delivering PDFs with React and .NET, and then show an example implementation.
\\nThen, we’ll discuss the solution my team developed using React and .NET, and close with alternatives and offerings from the various cloud providers. Feel free to follow along with my sample project on GitHub.
\\nBefore diving into implementations, it helps to understand a PDF workflow through an example.
\\nLet’s consider a vehicle repair shop. Customers go to this company to get their vehicles serviced for things like car body damage or engine repair. As part of this company’s normal workflow, they’ll need to show the customer a report of the vehicle’s current status, and then another report after the repairs.
\\nObviously, the vehicle status reports could be printouts. The company could just manually print the report from their software system of choice and then hand-deliver it to the customer.
\\nWhat if the customer is unable to stay while the repairs are being done? Or what if the customer wants to have an electronic copy of the report saved for later review? Creating a PDF is an easy solution that provides more options for the customer and could potentially be easier for the business as well.
In order to use a PDF report, the company would need to build something to gather the vehicle data, generate the report as a PDF, store it somewhere accessible, and deliver it to the customer.
\\nSince this is all an example, we can assume the company would leverage a service like Twilio for text messaging and SendGrid for email to deliver the report to customers. One could also consider sending the reports directly to Google Drive or some other shared location.
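To make the delivery step concrete, here is a minimal sketch of emailing a report link with SendGrid's C# client. The sender, recipient, and reportUrl are placeholders, and this code is not part of the sample project:

using System;
using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

// Sketch only: email the customer a link to their PDF report via SendGrid.
// The addresses below are placeholders, not values from this article.
public static async Task SendReportLinkAsync(string reportUrl)
{
    var client = new SendGridClient(Environment.GetEnvironmentVariable("SENDGRID_API_KEY"));
    var message = MailHelper.CreateSingleEmail(
        new EmailAddress("reports@example-shop.com", "Repair Shop"),
        new EmailAddress("customer@example.com"),
        "Your vehicle status report",
        $"Your report is ready: {reportUrl}",
        $"<a href=\"{reportUrl}\">View your vehicle status report</a>");
    await client.SendEmailAsync(message);
}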
A very simple workflow would be: the shop enters the vehicle data, the system generates a PDF report, the report is uploaded to shared storage, and the customer receives a link to it.
\\nObviously this is just using broad strokes, but the point is to consider a workflow around which to build a technical solution.
For each stage, you could consider tools like React for the frontend, a .NET Web API with QuestPDF for generating the PDF, Azure Blob Storage for storage, and Twilio or SendGrid for delivery.
\\nLet’s now discuss how you could use QuestPDF to package data into a PDF for output.
If you follow along with my sample project, you'll note that I have created all the pieces of this workflow for you to see. It closely follows the broad-strokes workflow I described above.
\\nIn the folder frontend, there is a React project that demonstrates a web page with a button that calls a Web API. This same type of action could be done with a mobile application, desktop app, or different frontend framework. When run locally, you should see this web form:
\\nThis example project’s frontend takes the inputs in a web form, then sends them as a POST
request and gets a URL where the PDF can be viewed:
const handleSubmit = async (e) => {\\n e.preventDefault();\\n setIsLoading(true);\\n setError(null);\\n setSuccess(false);\\n\\n try {\\n // Replace with your actual API endpoint\\n const response = await axios.post(\'http://localhost:5039/api/Report/create-report\', formData);\\n\\n // Check if we\'re using the displayUrl or the direct url\\n const reportUrl = response.data.displayUrl || response.data.url;\\n\\n // Open the URL in a new tab\\n window.open(reportUrl, \'_blank\');\\n setSuccess(true);\\n\\n // Reset the form\\n setFormData({\\n name: \'\',\\n wheels: \'\',\\n paint: \'\',\\n engine: \'\'\\n });\\n } catch (err) {\\n console.error(\'Error creating report:\', err);\\n setError(\'Failed to create report. Please try again.\');\\n } finally {\\n setIsLoading(false);\\n }\\n };\\n\\n
In the backend folder, I have a .NET web API that receives the web request POST
, gathers data, and generates a PDF with QuestPDF. The controller takes the PDF and uploads it to Blob Storage. It then retrieves a Shared Access Signature (SAS) link, which can be opened in a new tab. The SAS link is returned in the payload of the API:
[HttpPost(\\"create-report\\")]\\npublic async Task<IActionResult> CreateReport([FromBody] ReportRequest request)\\n{\\n try\\n {\\n // 1. Generate PDF using QuestPDF\\n var pdfBytes = GeneratePdf(request);\\n\\n // 2. Upload to Azure Blob Storage\\n var blobName = $\\"report-{Guid.NewGuid()}.pdf\\";\\n var sasUrl = await UploadToBlobStorageAndGetSasUrl(pdfBytes, blobName);\\n\\n // 3. Return the SAS URL\\n return Ok(new { url = sasUrl });\\n }\\n catch (Exception ex)\\n {\\n return StatusCode(500, $\\"Error creating report: {ex.Message}\\");\\n }\\n}\\n\\n
In this example, we’re using QuestPDF to generate the PDF:
\\nprivate byte[] GeneratePdf(ReportRequest request)\\n{\\n // Configure QuestPDF license (Free for personal and small business use)\\n QuestPDF.Settings.License = LicenseType.Community;\\n\\n using (var stream = new MemoryStream())\\n {\\n Document.Create(container =>\\n {\\n container.Page(page =>\\n {\\n page.Size(PageSizes.A4);\\n page.Margin(50);\\n page.Header().Element(ComposeHeader);\\n page.Content().Element(content => ComposeContent(content, request));\\n page.Footer().AlignCenter().Text(text =>\\n {\\n text.CurrentPageNumber();\\n text.Span(\\" / \\");\\n text.TotalPages();\\n });\\n });\\n }).GeneratePdf(stream);\\n\\n return stream.ToArray();\\n }\\n}\\n\\n
Please note that this example is very simple by design. QuestPDF has many more features that can be showcased with PDF output, and I encourage you to review their documentation for more information.
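The ComposeHeader and ComposeContent helpers referenced above aren't shown in this snippet. As a rough sketch of what they might contain, assuming ReportRequest exposes Name, Wheels, Paint, and Engine properties matching the frontend form, they could look like this (the sample project's actual helpers may differ):

using QuestPDF.Fluent;
using QuestPDF.Infrastructure;

// Sketch only: compose a simple header and a column of labeled fields
private void ComposeHeader(IContainer container)
{
    container.Text("Vehicle Status Report").FontSize(20).Bold();
}

private void ComposeContent(IContainer container, ReportRequest request)
{
    container.PaddingVertical(10).Column(column =>
    {
        column.Spacing(8);
        column.Item().Text($"Customer: {request.Name}");
        column.Item().Text($"Wheels: {request.Wheels}");
        column.Item().Text($"Paint: {request.Paint}");
        column.Item().Text($"Engine: {request.Engine}");
    });
}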
\\nTo upload the file and get the PDF URL, I use a connection string in my app’s config. I then make sure to set the ContentDisposition
header so that when the URL is returned, it can be opened directly in a new tab:
private async Task<string> UploadToBlobStorageAndGetSasUrl(byte[] content, string blobName)\\n {\\n // Get the container client\\n var containerClient = _blobServiceClient.GetBlobContainerClient(_containerName);\\n\\n // Ensure container exists (create if it doesn\'t)\\n await containerClient.CreateIfNotExistsAsync(Azure.Storage.Blobs.Models.PublicAccessType.None);\\n\\n // Get blob client and upload the file with properties to set content disposition\\n var blobClient = containerClient.GetBlobClient(blobName);\\n\\n // Create blob upload options with content disposition to open in browser\\n var options = new Azure.Storage.Blobs.Models.BlobUploadOptions\\n {\\n HttpHeaders = new Azure.Storage.Blobs.Models.BlobHttpHeaders\\n {\\n ContentType = \\"application/pdf\\",\\n ContentDisposition = \\"inline; filename=\\\\\\"report.pdf\\\\\\"\\"\\n }\\n };\\n\\n using (var stream = new MemoryStream(content))\\n {\\n await blobClient.UploadAsync(stream, options);\\n }\\n\\n // Generate a SAS token for the blob that expires in 1 hour\\n var sasBuilder = new BlobSasBuilder\\n {\\n BlobContainerName = _containerName,\\n BlobName = blobName,\\n Resource = \\"b\\", // b for blob\\n ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)\\n };\\n\\n sasBuilder.SetPermissions(BlobSasPermissions.Read);\\n\\n // Get the SAS URI\\n var sasUri = blobClient.GenerateSasUri(sasBuilder);\\n\\n return sasUri.ToString();\\n }\\n}\\n\\n
Submitting the web form should return a PDF that opens in a new tab.
To get all of this working, you'll need an Azure Storage account, its connection string in the API's configuration, and both the frontend and backend running.
\\nThis example project runs everything locally. In a production environment, you would have something like Azure App Service to host your API. Your frontend would also be hosted or served on something like Azure Static Web Apps.
\\nI know that this example uses a web API, but it could just as easily be done with a serverless option like Azure Functions. You could still get an HTTP URL that calls a serverless function to do the same process.
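As a rough sketch of that serverless variant, the same endpoint could look like the following in an Azure Functions isolated-worker project. It reuses the GeneratePdf and UploadToBlobStorageAndGetSasUrl helpers shown earlier and is an assumption for illustration, not part of the sample project:

using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class CreateReportFunction
{
    // Same flow as the controller: generate the PDF, upload it, return the SAS URL
    [Function("CreateReport")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        var request = await req.ReadFromJsonAsync<ReportRequest>();
        var pdfBytes = GeneratePdf(request);
        var sasUrl = await UploadToBlobStorageAndGetSasUrl(pdfBytes, $"report-{Guid.NewGuid()}.pdf");

        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteAsJsonAsync(new { url = sasUrl });
        return response;
    }
}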
\\nThe nice part about this implementation is that it generates a PDF file that can be easily sent somewhere. This can accommodate situations where you may want to deliver reports asynchronously, or at least electronically. This also provides many options, as you can easily customize the report in code.
\\nAs I mentioned in the intro, the team I lead in my professional role was faced with a situation similar to the repair shop example I outlined in the above section. We needed a way to generate PDFs with our .NET API and deliver them to customers. The workflow that my team used on our project was as follows:
Our customer goes to our application and navigates to a place where they click a button to get their PDF report. Similar to my earlier example, the button then triggers a web request to a .NET Web API that generates the PDF, uploads it to Blob Storage, and returns a secure link to the customer.
\\nMy team uses Azure. There are similar offerings for this workflow with both AWS and Google Cloud.
\\n\\nSimilarly, even though my project starts with a React frontend, we could just as easily have something with Angular, Vue, or something else entirely. The only important requirement is the ability to make a web request.
A few points about this workflow are worth noting.
\\nOne question that you may also be asking is, do we need to store the PDFs at all?
\\nYou can stream results directly to your browser to open in a new tab. One main reason to use this PDF solution with the Azure Blob Storage is that it allows you to keep a copy of what is generated. The copies could serve as a history or be presented to a customer if needed.
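If you decide storage isn't needed, a minimal sketch of streaming the generated PDF straight back from the controller could look like this; the create-report-inline route is made up for illustration:

// Sketch: skip Blob Storage entirely and return the bytes directly
[HttpPost("create-report-inline")]
public IActionResult CreateReportInline([FromBody] ReportRequest request)
{
    var pdfBytes = GeneratePdf(request);

    // "inline" asks the browser to render the PDF in the tab instead of downloading it
    Response.Headers["Content-Disposition"] = "inline; filename=\"report.pdf\"";
    return File(pdfBytes, "application/pdf");
}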
Modern browsers also have security restrictions around what can be downloaded. Having a secure URL simplifies the interaction with the browser, as it only has to handle downloading content with the PDF MIME type. There are several options for streaming downloads directly, and I encourage you to review the article here on downloading files in the browser for more information.
\\nMy team found that this solution made integration much easier. Since everything is server-side, we are able to just move this solution anywhere we want in the customer application. We have also built services and typed properties for our reports, which makes them easier to maintain and debug.
\\nIf this solution had been more client-based (JavaScript), you would have had to go through iterations of deployments. You’d also be forced to deal with various discrepancies between the way different browsers handle downloads.
\\nSome modern browsers add additional security checks when downloading content to a computer. Having this solution fully server-side gives my team the flexibility to own the process without relying on a browser. It also makes it easily portable if we wanted to move the API call to another portion of a website or application.
\\n\\nIn my team’s solution, we were using a .NET Web API. As such, we used a NuGet package to create the PDF for output.
Let's quickly highlight some other options you could use for PDF generation with .NET. I'm focusing on .NET packages, because that's what my team considered and where our focus was for our solution. The .NET ecosystem offers several packages that could also be used.
\\nI spent the most time with QuestPDF and DynamicPDF solutions. Both are solid options and provide great documentation, in my experience. DynamicPDF provides a solid support plan and a large array of services outside of just generating a PDF. QuestPDF provides a simple solution that can easily be integrated into smaller applications as well.
\\n\\nThe above options are all just a sampling of what is potentially out there. If your team is not in the .NET world, you can find a solution in languages like JavaScript, Python, Java, PHP, Ruby, and Go.
In this article, I presented an example PDF reporting workflow, a sample implementation built with React, .NET, QuestPDF, and Azure Blob Storage, and a survey of alternative tools and cloud offerings.
\\nI hope this post has shown a simple solution that can scale and be used for your team’s projects. Every team is different, but hopefully, the core of this workflow can help your team create a solid solution for your reporting needs. Thanks for reading my post!
Progressive web apps (PWAs) combine the advantages of web applications with the comfort of native apps, which can run locally and offline. Over the last few years, support for PWAs, in terms of available features, has increased. PWAs have become an attractive alternative to investing the effort to build both web and native versions of an application.
In this article, we will leverage Rust to build the business logic for a rich PWA, which can then be combined with a JavaScript/TypeScript frontend. Of course, it's also possible to build the frontend with Rust, such as by using Leptos. But the point of this article is to show that complex business logic, including storage, networking, and security concerns, can be built in Rust via WebAssembly (Wasm), because performance, safety, and robustness are especially relevant in those areas.
\\nBut first, let’s talk a bit about progressive web apps and why they’re an interesting approach.
\\nPWAs combine the strengths of both mobile apps and web apps but only require a single code base, allowing us to leverage the whole range of the rich web ecosystem.
\\nBesides that, since the advent of Wasm, we can now also leverage the ecosystems of other languages that compile to Wasm, such as Rust, which gives us another level of power over what we can do in the browser.
\\nCompared to traditional web apps, PWAs offer native-like features such as push notifications and service workers for caching and offline access. This not only boosts performance but also provides a more native feel.
\\nAnother great benefit of PWAs is that we can, with the same code base, offer a traditional web application in addition to the PWA, which serves as a replacement for a native mobile or desktop application.
\\nUpdate management and independence from app stores are additional benefits, and we can use all the security features of browsers and build on top of them.
\\nAs you can see, PWAs offer a range of powerful concepts we can take advantage of.
In this tutorial, we will build a simple, contrived application with a storage layer based on SurrealDB, a multi-model database that can work against a file, a cloud instance, or the browser's IndexedDB, all through the same API.
We'll also use a Nostr-based networking layer. If you've never heard of Nostr, think early Twitter, but semi-decentralized via relays and focused on a simple, open protocol on which any social app can be built. It uses WebSockets, with messages transported via relay servers, to communicate between users. We won't explore Nostr in too much detail; we'll just use it to show that complex networking can easily be achieved using Rust and Wasm in the browser, and that this can be used in a PWA.
\\nBesides these layers, we’ll also implement ECIES-based encryption and decryption for messages and images that we upload. On startup of the application, we will generate keys for Nostr and ECIES and store them. If the keys exist on startup, we just use them and don’t generate them anew.
\\nThen, it will be possible to send messages to the Nostr network and to fetch these published messages from the network. We will also encrypt and store every message we send locally and will be able to fetch them.
\\nAdditionally, we will implement a file upload for images, where we’ll encrypt and store the image bytes in IndexedDB via SurrealDB and, when fetched, decrypt and show them in the browser.
\\n\\nOf course, this doesn’t constitute a “real” application, but the goal is to show that based on the layers we build here, we will be able to build a complex, rich application.
\\nBecause of the scope of this article, we won’t be able to build fully coherent features, and we’ll also skip a bit on error handling and automated tests. Instead, we’ll implement some use cases in a basic way to showcase that it’s possible to build just about anything you want for a PWA on this basis.
\\nLet’s start building!
\\nTo follow along, all you need is a reasonably recent Rust installation. 1.85 is the latest one at the time of this writing.
\\nFirst, create a new Rust project:
\\ncargo new rust-pwa-example\\ncd rust-pwa-example\\n\\n
Next, edit the Cargo.toml
file and add the dependencies you’ll need:
[lib]
crate-type = ["cdylib"]

[dependencies]
serde = "1.0"
serde-wasm-bindgen = "0.6"
ecies = { version = "0.2", default-features = false, features = ["pure"] }
hex = "0.4"
surrealdb = { version = "2.2", features = ["kv-indxdb"] }
getrandom = { version = "0.3", features = ["wasm_js"] }
wasm-bindgen = "0.2"
wasm-bindgen-futures = "0.4"
nostr-sdk = "0.39"
console_error_panic_hook = "0.1"
console_log = { version = "1.0", features = ["color"] }
log = "0.4"
We use serde
and serde-wasm-bindgen
for serialization between Rust and JavaScript/TypeScript. As mentioned above, we’re going to encrypt messages and images, and we’ll use the ecies
and hex
crates to do so.
For storage, we’ll use surrealdb
. Since we’re building a Wasm-based application, we also add wasm-bindgen
and wasm-bindgen-futures
, so we can create bindings for JS/TS into our Rust code and can run asynchronous functions on the Rust side as well.
For our network layer via Nostr, we’ll use nostr-sdk
and finally, we add the log and console_log crates, so our Rust logs show up in the browser, along with console_error_panic_hook, so panics are logged in the browser as well.
Since we’re not only building a Wasm-based Rust project but will embed this within a progressive web app, we also need to create some files for that purpose.
\\nWe’ll start with a basic index.html
file:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n<head>\\n <meta charset=\\"UTF-8\\">\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1.0\\">\\n <meta name=\\"theme-color\\" content=\\"#000000\\">\\n <link rel=\\"manifest\\" href=\\"/manifest.json\\">\\n <title>Example PWA with Rust</title>\\n</head>\\n<body>\\n <h1>Example PWA using Rust</h1>\\n <script type=\\"module\\" src=\\"/main.js\\"></script>\\n</body>\\n</html>\\n\\n
This references a basic main.js
, which can stay empty for now, and a manifest.json
, which we’ll create next:
{\\n \\"short_name\\": \\"RustExamplePWA\\",\\n \\"name\\": \\"Rust Example Progressive Web App\\",\\n \\"icons\\": [\\n {\\n \\"src\\": \\"img/logo_192x192.png\\",\\n \\"sizes\\": \\"192x192\\",\\n \\"type\\": \\"image/png\\"\\n },\\n {\\n \\"src\\": \\"img/logo_512x512.png\\",\\n \\"sizes\\": \\"512x512\\",\\n \\"type\\": \\"image/png\\"\\n }\\n ],\\n \\"start_url\\": \\"/\\",\\n \\"background_color\\": \\"#000000\\",\\n \\"theme_color\\": \\"#dddddd\\",\\n \\"display\\": \\"standalone\\"\\n}\\n\\n
Here, we define the name, short name, icons, and some other settings for the progressive web app. This is relevant for the case where the app is installed locally on a phone.
\\nThe following is our service-worker.js
:
self.addEventListener(\\"install\\", (event) => {\\n event.waitUntil(\\n caches.open(\\"pwa-cache\\").then((cache) => {\\n return cache.addAll([\\"/\\", \\"/index.html\\", \\"/main.js\\", \\"/img/logo_192x192.png\\", \\"/img/logo_512x512.png\\", \\"/pkg/index.js\\", \\"/pkg/index_bg.wasm\\"]);\\n })\\n );\\n});\\n\\nself.addEventListener(\\"fetch\\", (event) => {\\n event.respondWith(\\n caches.match(event.request).then((response) => {\\n return response || fetch(event.request);\\n })\\n );\\n});\\n\\n
What we add here is caching: when the service worker installs, all relevant files, both the manually created ones and the generated Wasm files, are added to the cache so the app works fully offline.
\\nFinally, we’ll create some cute icons using gen AI and add them to the img
folder.
That’s it for the initial setup! Let’s get to the Wasm build pipeline next so we can fully build our project.
\\nTo build a Wasm binary that can be used in the browser from our Rust code, we use the wasm-pack tool, and we have to compile to the wasm32-unknown-unknown
target, which can be added to your local Rust installation like this:
rustup target add wasm32-unknown-unknown\\n\\n
Also, in .cargo/config.toml
, we need to add some additional flags:
[build]\\ntarget = \\"wasm32-unknown-unknown\\"\\nrustflags = [\\"--cfg\\", \\"getrandom_backend=\\\\\\"wasm_js\\\\\\"\\"]\\n\\n
This configures what cargo
, when run inside of this project, uses by default.
The target
setting is to make sure our cargo commands, like check
and clippy
, as well as our editor, use the wasm32
target by default.
The rustflags
setting is something we need in this case for the given versions of SurrealDB
and getrandom
, which configures the getrandom
crate to use the wasm_js
backend. This means that random number generation doesn’t use an OS-specific backend but a JavaScript-based one, as documented here.
Using the getrandom
crate won’t be necessary for every Wasm-based project, but it’s good to be aware of it since some dependencies can have specifics regarding being able to be compiled for Wasm.
Now, we can build the project and start to implement our app.
\\nWhen building our app, we’ll start with the SurrealDB-based storage layer. The first thing we need to do is to initialize a database connection (underneath, in the browser, SurrealDB uses IndexedDB) that we can use from any further function call into our API.
\\nFor this purpose, we’ll use a thread_local
variable to keep the database connection around. To be able to initialize it and re-use it, we use a RefCell
with an Option
to the database connection inside:
use surrealdb::{engine::any::Any, Surreal};\\nuse std::cell::RefCell;\\n\\nthread_local! {\\n static SURREAL_DB: RefCell<Option<Surreal<Any>>> = const { RefCell::new(None) };\\n}\\n\\n
Using this, we can now initialize the database and create a get_db()
function, with which we can get a handle to it in any other API function:
async fn init_surreal_db() {\\n let db = surrealdb::engine::any::connect(\\"indxdb://default\\")\\n .await\\n .unwrap();\\n db.use_ns(\\"\\").use_db(\\"default\\").await.unwrap();\\n SURREAL_DB.with(|surreal_db| {\\n let mut db_ref = surreal_db.borrow_mut();\\n if db_ref.is_none() {\\n *db_ref = Some(db);\\n }\\n });\\n}\\n\\nfn get_db() -> Surreal<Any> {\\n SURREAL_DB\\n .with(|db| db.borrow().clone())\\n .expect(\\"is initialized\\")\\n}\\n\\n
Above, we initialize a connection to IndexedDB. Next, we initialize logging so we can log from Rust to the browser:
\\nfn init_logging() {\\n std::panic::set_hook(Box::new(console_error_panic_hook::hook));\\n console_log::init_with_level(log::Level::Info).unwrap();\\n}\\n\\n
We also call our init functions within an initialize()
function, which we expose to JS/TS via wasm_bindgen
:
#[wasm_bindgen]\\npub async fn initialize() {\\n init_logging();\\n init_surreal_db().await;\\n}\\n\\n
Once we build this app, in the main.js
file we generated before, we can import the initialize
function and call it:
import init, {\\n initialize,\\n} from \'../pkg/index.js\';\\n\\nasync function run() {\\n await init();\\n await initialize();\\n}\\n\\nif (\\"serviceWorker\\" in navigator) {\\n navigator.serviceWorker\\n .register(\\"/service-worker.js\\")\\n .then(() => console.log(\\"registered service worker\\"))\\n .catch((err) => console.error(\\"registration of service worker failed\\", err));\\n}\\n\\nawait run();\\n\\n
Above, we initialized the serviceWorker
that we defined before, initializing wasm
using the init()
function, and initializing our API using our exported initialized()
function.
This should serve well as a basis for our API layer. Let’s implement encryption next:
\\nuse ecies::utils::generate_keypair;\\nuse serde::{Deserialize, Serialize};\\n\\n#[derive(Serialize, Deserialize, Clone)]\\npub struct Keys {\\n pub sk: String,\\n pub pk: String,\\n}\\n\\npub fn generate_encryption_keys() -> Keys {\\n let (sk, pk) = generate_keypair();\\n Keys {\\n sk: hex::encode(sk.serialize()),\\n pk: hex::encode(pk.serialize()),\\n }\\n}\\n\\n
We define a Keys
struct for storing and retrieving keys and a function to create a new ECIES key pair.
Then, we implement the persistence logic so we can both fetch these keys from the database and also store a key pair there:
\\nasync fn get_encryption_keys_from_db() -> Option<Keys> {\\n let db = get_db();\\n let res: Option<Keys> = db.select((\\"keys\\", \\"encryption\\")).await.unwrap();\\n res\\n}\\n\\nasync fn save_encryption_keys_to_db(keys: &Keys) {\\n let db = get_db();\\n let _: Option<Keys> = db\\n .create((\\"keys\\", \\"encryption\\"))\\n .content(keys.clone())\\n .await\\n .unwrap();\\n}\\n\\n
We use the above-defined get_db()
function to get our stored DB connection and use the SQL-like SurrealDB SDK API to save keys to and fetch from the keys
table.
In lib.rs/initialize
, we generate the keys on startup and save them in the database:
...\\n save_encryption_keys_to_db(&generate_encryption_keys()).await;\\n...\\n\\n
That takes care of our basic key management (of course, in a real production application, we wouldn't just save the private key as plaintext in the database, but would rather protect it with an API such as the Web Crypto API).
\\nNext, let’s implement utility functions for encrypting and decrypting bytes:
fn encrypt(input: &[u8], key: &str) -> Vec<u8> {
    let decoded_key = hex::decode(key).unwrap();
    ecies::encrypt(&decoded_key, input).unwrap()
}

fn decrypt(input: &[u8], key: &str) -> Vec<u8> {
    let decoded_key = hex::decode(key).unwrap();
    ecies::decrypt(&decoded_key, input).unwrap()
}
We simply use the ecies
crate API, with the public and private key passed in as hex-encoded strings to implement encryption and decryption.
Now that we have the basics of storage and encryption in place, let’s get to encrypting and storing messages and files!
\\nFirst, let’s define some basic data types:
\\n#[derive(Serialize, Deserialize, Clone)]\\npub struct Message {\\n pub msg: String,\\n}\\n\\n#[derive(Serialize, Deserialize, Clone)]\\npub struct File {\\n pub name: String,\\n pub bytes: Vec<u8>,\\n}\\n\\n
Then, we start by implementing a function to fetch all local messages from a msg
table and a function that we expose, which gets the encryption keys, fetches the messages, decrypts them, and serializes them so they can be used from JavaScript/TypeScript. Let’s take a look:
use wasm_bindgen::prelude::*;\\n\\nasync fn fetch_messages() -> Vec<Message> {\\n let db = get_db();\\n let msgs: Vec<Message> = db.select(\\"msg\\").await.unwrap();\\n msgs\\n}\\n\\n#[wasm_bindgen]\\npub async fn fetch_and_decrypt_local_messages() -> JsValue {\\n let encryption_keys = get_encryption_keys_from_db().await.unwrap();\\n let msgs = fetch_messages().await;\\n let decrypted: Vec<String> = msgs\\n .into_iter()\\n .map(|msg| {\\n std::str::from_utf8(&decrypt(\\n &hex::decode(msg.msg.as_bytes()).unwrap(),\\n &encryption_keys.sk,\\n ))\\n .unwrap()\\n .to_owned()\\n })\\n .collect();\\n serde_wasm_bindgen::to_value(&decrypted).unwrap()\\n}\\n\\n
The database fetching is the same as with the keys — nothing too interesting here. But when we check out the fetch_and_decrypt_local_messages
function, which is annotated using wasm_bindgen
again, thus being exposed to JS/TS, it has a JsValue
as a return value.
We can’t just pass Rust values to and from JS/TS. For some values, such as strings and numbers, this works fine, but for more complex data types, there are some limits, and we have to serialize them in some way.
\\nOne way to do this is to just serialize it to a JSON string and then parse it in JS/TS. In this case, we use the serde_wasm_bindgen
crate, which lets us serialize to wasm_bindgen::JsValue
and deserialize from the same type, which makes it possible to serialize to arbitrary JavaScript values.
So, after fetching the keys and messages, we iterate the result and decrypt the messages by converting them to bytes, decrypting, and converting them to a UTF-8 string again.
\\nFinally, we use serde_wasm_bindgen
to serialize the Vec<String>
to be used in JS/TS.
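As an aside, the JSON-string route mentioned above could look like the following minimal sketch. It assumes you add serde_json to Cargo.toml, which this project doesn't actually do:

// Hypothetical alternative: return a JSON string and parse it on the JS side.
// Requires serde_json = "1.0" as an extra dependency.
#[wasm_bindgen]
pub async fn fetch_local_messages_as_json() -> String {
    let msgs: Vec<String> = vec!["example message".to_owned()]; // stand-in for the decrypted messages
    serde_json::to_string(&msgs).unwrap()
}

On the JS/TS side, you would then call JSON.parse on the returned string.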
Cool! We’ll implement saving messages later when we implement our network layer. Now, though, we still have our images to take care of. We can just re-use the logic we used before to implement encrypting and storing and retrieving and decrypting images to and from the storage:
\\n#[wasm_bindgen]\\npub async fn save_image(file_name: &str, file_bytes: Vec<u8>) {\\n let encryption_keys = get_encryption_keys_from_db().await.unwrap();\\n let db = get_db();\\n let f = File {\\n name: file_name.to_owned(),\\n bytes: encrypt(&file_bytes, &encryption_keys.pk),\\n };\\n let _: Option<File> = db.create(\\"img\\").content(f).await.unwrap();\\n}\\n\\n#[wasm_bindgen]\\npub async fn fetch_images() -> JsValue {\\n let encryption_keys = get_encryption_keys_from_db().await.unwrap();\\n let db = get_db();\\n let files: Vec<File> = db.select(\\"img\\").await.unwrap();\\n let decrypted: Vec<File> = files\\n .into_iter()\\n .map(|f| File {\\n name: f.name,\\n bytes: decrypt(&f.bytes, &encryption_keys.sk),\\n })\\n .collect();\\n serde_wasm_bindgen::to_value(&decrypted).unwrap()\\n}\\n\\n
We fetch the encryption keys, get our DB connection, and create a File
, with the bytes being the encrypted bytes of the incoming image.
For fetching images, we do the same as above in the fetch_and_decrypt_messages
function.
That’s it for the storage layer. Let’s implement the network layer next.
\\nFor our Nostr-based networking layer, we start the same way as with storage, by initializing a Nostr client that we can use from API functions using thread_local
and RefCell
:
thread_local! {\\n static NOSTR_CLIENT: RefCell<Option<nostr_sdk::Client>> = const { RefCell::new(None) };\\n ...\\n}\\n\\n
Then, we implement an init_nostr_client
function, which takes a Nostr private key as a parameter:
async fn init_nostr_client(private_key: &str) {\\n let keys = nostr_sdk::Keys::parse(private_key).unwrap();\\n let client = nostr_sdk::Client::builder().signer(keys.clone()).build();\\n client.add_relay(\\"wss://relay.damus.io\\").await.unwrap();\\n\\n client.connect().await;\\n let meta = nostr_sdk::Metadata::new()\\n .name(\\"wasmTestUser\\")\\n .display_name(\\"wasmTestUser\\");\\n client.set_metadata(&meta).await.unwrap();\\n\\n NOSTR_CLIENT.with(|cl| {\\n let mut client_ref = cl.borrow_mut();\\n if client_ref.is_none() {\\n *client_ref = Some(client);\\n }\\n });\\n}\\n\\nfn get_nostr_client() -> nostr_sdk::Client {\\n NOSTR_CLIENT\\n .with(|client| client.borrow().clone())\\n .expect(\\"is initialized\\")\\n}\\n\\n
We transform the given key to a Nostr Keys
and create a new Client
. We add a relay — in this case, just the default damus.io (one of the Nostr social network clients) relay and connect to it.
Then, we add some dummy metadata for our user and set the client to our thread_local
variable.
Now, with the get_nostr_client
function, we can adapt our initialize
function to check if we already have keys in the storage or generate new ones and initialize our database and networking layer:
#[wasm_bindgen]\\npub async fn initialize() -> String {\\n init_logging();\\n init_surreal_db().await;\\n if let Some(nostr_keys) = get_nostr_keys_from_db().await {\\n init_nostr_client(&nostr_keys.sk).await;\\n nostr_keys.pk\\n } else {\\n let nostr_keys = generate_nostr_keys();\\n save_nostr_keys_to_db(&nostr_keys).await;\\n save_encryption_keys_to_db(&generate_encryption_keys()).await;\\n nostr_keys.pk\\n }\\n}\\n\\n
We initialize logging and storage and, if we don’t have keys yet, generate and store them. Once we have the keys, we initialize the Nostr client.
\\nNext, let’s define the Event
data structure that we’ll use to fetch Nostr messages from the network:
#[derive(Serialize, Deserialize, Clone)]\\npub struct Event {\\n pub id: String,\\n pub pk: String,\\n pub content: String,\\n pub ts: u64,\\n}\\n\\n
We also define the function to generate initial Nostr keys:
\\npub fn generate_nostr_keys() -> Keys {\\n let keys = nostr_sdk::Keys::generate();\\n Keys {\\n sk: keys.secret_key().to_secret_hex(),\\n pk: keys.public_key().to_hex(),\\n }\\n}\\n\\n
Now we can get to sending messages to the Nostr network:
\\nuse wasm_bindgen_futures::spawn_local;\\nuse log::info;\\n\\n#[wasm_bindgen]\\npub async fn send_nostr_msg(msg: &str) {\\n let msg = msg.to_owned();\\n let msg_clone = msg.clone();\\n spawn_local(async move {\\n let event_builder = nostr_sdk::EventBuilder::text_note(msg_clone);\\n let event_id = get_nostr_client()\\n .send_event_builder(event_builder)\\n .await\\n .unwrap();\\n info!(\\"sent event, event id: {}\\", event_id.id());\\n });\\n let encryption_keys = get_encryption_keys_from_db().await.unwrap();\\n let encrypted = hex::encode(encrypt(msg.as_bytes(), &encryption_keys.pk));\\n save_encrypted_msg(&encrypted).await;\\n}\\n\\n
In this function, we showcase how we can actually run something asynchronously in Rust using wasm_bindgen_futures::spawn_local
, even if it’s called from JS/TS. This way, we can run entire background processes on the event loop in Wasm.
We have to clone the message so we can pass it to the async function. Then, we use the Nostr EventBuilder
to create a text note and send the event using the Nostr client we created and stored above.
Finally, we encrypt the message and store it in IndexedDB with the following function:
\\nasync fn save_encrypted_msg(encrypted: &str) {\\n let db = get_db();\\n let _: Option<Message> = db\\n .create(\\"msg\\")\\n .content(Message {\\n msg: encrypted.to_owned(),\\n })\\n .await\\n .unwrap();\\n}\\n\\n
Nothing too surprising here; we simply save the given encrypted message into the msg
table.
With event sending in place, we can now implement fetching events from the Nostr network:
\\n#[wasm_bindgen]\\npub async fn fetch_nostr_events(from: &str) -> Result<JsValue, JsValue> {\\n let filter = nostr_sdk::Filter::new()\\n .author(nostr_sdk::PublicKey::parse(from).unwrap())\\n .kind(nostr_sdk::Kind::TextNote);\\n let events = get_nostr_client()\\n .fetch_events(filter, std::time::Duration::from_secs(10))\\n .await\\n .unwrap();\\n Ok(serde_wasm_bindgen::to_value(\\n &events\\n .into_iter()\\n .map(|e| Event {\\n id: e.id.to_hex(),\\n pk: e.pubkey.to_hex(),\\n content: e.content,\\n ts: e.created_at.as_u64(),\\n })\\n .collect::<Vec<Event>>(),\\n )\\n .unwrap())\\n}\\n\\n
We first create a Nostr Filter
, which we configure to just filter for TextNote
events from ourselves (author
). Here, we could also define other filters to get other kinds of messages from the network, such as only fetching events that have been published in the last X days or hours.
Then, we use our Nostr client to fetch events based on this filter with a timeout of 10 seconds. The events we get from Nostr, transformed into our Event
struct, are then serialized to JsValue
s using serde_wasm_bindgen
.
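As a sketch of the time-window idea mentioned above, restricting the query to roughly the last 24 hours could look like this; the cutoff is arbitrary, and the rest of the filter matches the one in the function:

// Sketch: only fetch our own text notes from roughly the last 24 hours
let one_day_ago = nostr_sdk::Timestamp::from(nostr_sdk::Timestamp::now().as_u64() - 24 * 60 * 60);
let filter = nostr_sdk::Filter::new()
    .author(nostr_sdk::PublicKey::parse(from).unwrap())
    .kind(nostr_sdk::Kind::TextNote)
    .since(one_day_ago);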
That's it for our simple networking layer. Of course, we could also have used a crate such as reqwest to make arbitrary HTTP requests.
\\nFinally, let’s build a very simple GUI to interact with the API we built in Rust for Wasm:
\\n<!DOCTYPE html>\\n<html lang=\\"en\\">\\n<head>\\n <meta charset=\\"UTF-8\\">\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1.0\\">\\n <meta name=\\"theme-color\\" content=\\"#000000\\">\\n <link rel=\\"manifest\\" href=\\"/manifest.json\\">\\n <title>Example PWA with Rust</title>\\n</head>\\n<body>\\n <h1>Example PWA using Rust</h1>\\n <div>Local npub: <span id=\\"npub\\"></span></div>\\n <input type=\\"text\\" id=\\"inp\\"/>\\n <button id=\\"sb\\">Send</button>\\n <button id=\\"fetch\\">Fetch Nostr Events</button>\\n <h3>Remote Nostr Events:</h3>\\n <div id=\\"remote_events\\"></div>\\n <h3>Local decrypted messages:</h3>\\n <div id=\\"local_messages\\"></div>\\n <div>\\n <h2>Uploads</h2>\\n <div>\\n <input type=\\"file\\" id=\\"file_input\\" />\\n <button type=\\"button\\" id=\\"upload\\">upload</button>\\n <button type=\\"button\\" id=\\"fetch_images\\">Fetch Images</button>\\n </div>\\n <div id=\\"files\\">\\n </div>\\n </div>\\n <script type=\\"module\\" src=\\"/main.js\\"></script>\\n</body>\\n</html>\\n\\n
We add a few things to index.html
. First, we create a div to display our npub
, which is the public key and, at the same time, the identifier on the Nostr network.
Then, we add a text field with a submit button to send events and a button to trigger fetching events from the Nostr networks.
\\nThen, we add containers for both the remote events and the local messages.
\\nFinally, we add a file input and two buttons — one to upload and one to fetch and display all stored images — as well as a container for displaying the files.
\\nOf course, in a real application, we could also use frameworks such as Svelte, Vue, or React to build a GUI and integrate it with our generated Wasm code.
\\nIn this case, we will just use vanilla JavaScript to import and use our API functions:
\\nimport init, {\\n save_image,\\n fetch_images,\\n initialize,\\n send_nostr_msg,\\n fetch_nostr_events,\\n fetch_and_decrypt_local_messages\\n} from \'../pkg/index.js\';\\n\\nlet npub;\\nasync function run() {\\n await init();\\n npub = await initialize();\\n document.getElementById(\\"npub\\").textContent = npub;\\n await refresh_local_messages();\\n}\\n\\nawait run();\\n\\n
We simply import the exported functions from the generated pkg/index.js
in the same way we did before with the initialize
function.
Then, we store the npub
returned from the new initialize
function, set it to the container for the npub, and call a function to refresh the local messages, so on startup, we display all locally stored messages:
async function refresh_local_messages() {\\n let local_messages = await fetch_and_decrypt_local_messages();\\n let container = document.getElementById(\\"local_messages\\");\\n container.innerHTML = \\"\\";\\n local_messages.forEach(str => {\\n let div = document.createElement(\\"div\\");\\n div.textContent = str;\\n container.appendChild(div);\\n });\\n}\\n\\n
Here, we just call the fetch_and_decrypt_local_messages
Wasm function, iterate the results, and add a div
for each of them into the local messages container.
Next, let’s implement the event handlers for sending messages and fetching remote messages from Nostr:
\\ndocument.getElementById(\\"sb\\").addEventListener(\\"click\\", async () => {\\n let input = document.getElementById(\\"inp\\").value;\\n await send_nostr_msg(input);\\n await refresh_local_messages();\\n});\\n\\ndocument.getElementById(\\"fetch\\").addEventListener(\\"click\\", async () => {\\n await refresh_remote_messages();\\n});\\n\\n
When clicking the Send
button, we send the text from the inp
text input to the Nostr network using the send_nostr_msg
Wasm function and call refresh_local_messages
, so the new message is immediately shown.
Similarly, on clicking the Fetch
button, we re-fetch and refresh the remote messages as well with the following function:
async function refresh_remote_messages() {\\n let events = await fetch_nostr_events(npub);\\n let container = document.getElementById(\\"remote_events\\");\\n container.innerHTML = \\"\\";\\n events.forEach(event => {\\n let div = document.createElement(\\"div\\");\\n div.textContent = event.content + \\" at \\" + event.ts + \\" (\\" + event.id + \\")\\";\\n container.appendChild(div);\\n });\\n}\\n\\n
We fetch the Nostr events using the Wasm function fetch_nostr_events
and put a div with the content, timestamp, and ID of the events into the message container.
Finally, let’s implement the file upload and fetching logic in our very basic GUI:
\\ndocument.getElementById(\\"upload\\").addEventListener(\\"click\\", async () => {\\n const file = document.getElementById(\\"file_input\\").files[0];\\n if (!file) return;\\n\\n const name = file.name;\\n const bytes = await file.arrayBuffer();\\n const data = new Uint8Array(bytes);\\n\\n await save_image(name, data);\\n console.log(\\"upload successful\\");\\n});\\n\\ndocument.getElementById(\\"fetch_images\\").addEventListener(\\"click\\", async () => {\\n let files = await fetch_images();\\n let container = document.getElementById(\\"files\\");\\n container.innerHTML = \\"\\";\\n files.forEach(file => {\\n let arr = new Uint8Array(file.bytes);\\n let blob = new Blob([arr]);\\n let url = URL.createObjectURL(blob);\\n\\n console.log(\\"file\\", file, url, blob);\\n let img = document.createElement(\\"img\\");\\n img.src = url;\\n container.appendChild(img);\\n });\\n});\\n\\n
For uploading, we take the file from the file input and transform the bytes of the file to a Uint8Array
, so we can pass it to the Rust-based Wasm API as a Vec<u8>
. We also send the file name and call the Wasm function save_image
to encrypt and store the image in IndexedDB.
When clicking on the Fetch Images
button, we call the Wasm API function fetch_images
and show the fetched and decrypted images in the files container.
That’s it for our very basic GUI. Now, let’s see if all of this works!
\\nAs mentioned above, we use wasm-pack
to build our Wasm binary:
wasm-pack build --dev --target web --out-name index\\n\\n
If we run this and check the pkg
folder, we’ll see the following files:
- index.js: The JavaScript entry point
- index.d.ts: TypeScript bindings
- index_bg.wasm: The Wasm binary
- index_bg.wasm.d.ts: TypeScript bindings for the Wasm binary
- package.json: Generated for the package, to publish it to npm or import it as a dependency, for example

Then, we can use a local http-server (but really, any http server that can serve .wasm
files will do) to serve the files, in this case, to http://localhost:8080/.
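For example, with the http-server package from npm (the port flag just matches the URL above):

npx http-server . -p 8080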
First, we can observe that Chrome shows us a button in the address bar to install the app as a PWA.
\\nIf we do this, the app will be available on the Chrome apps, and we can also create desktop or mobile icons to immediately start the app from our local systems. And since all files are cached, we could also use the app without a network connection.
Of course, to implement offline-first functionality properly, we would also need to think about conflict resolution and about what happens when someone wants to interact with the network while offline. That is out of scope for this article, but it's an important part of PWAs.
Finally, let's check out our application. We can see that a Nostr npub is created and that previously sent messages are fetched locally at startup.
\\nWe can send new messages, fetch remote messages, and even upload and fetch images! Everything seems to work — nice!
\\nVery cool! Everything we set out to do works. We’re able to leverage our Rust-based Wasm API to fully interact with browser storage and browser networking and even implement local encryption and decryption.
\\nBesides manual testing, we can, of course, leverage normal unit tests and also tests that are executed either in node
or the browser using wasm-pack test
.
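For example, a browser-run round-trip test of the encryption helpers might look like the following sketch; it assumes wasm-bindgen-test is added as a dev-dependency and that the test lives in lib.rs next to the helpers:

// Sketch: exercise encrypt/decrypt end to end in a headless browser
#[cfg(test)]
mod tests {
    use super::*;
    use wasm_bindgen_test::*;

    wasm_bindgen_test_configure!(run_in_browser);

    #[wasm_bindgen_test]
    fn encrypt_decrypt_roundtrip() {
        let keys = generate_encryption_keys();
        let encrypted = encrypt(b"hello wasm", &keys.pk);
        let decrypted = decrypt(&encrypted, &keys.sk);
        assert_eq!(decrypted, b"hello wasm");
    }
}

A test like this can then be run with wasm-pack test --headless --chrome.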
Also, it’s possible to debug the Wasm code in Chrome by creating a source map and serving it together with the Wasm code.
\\nYou can find the full code for this tutorial on GitHub. You can also play around with this simple example PWA here.
\\nIn this article, we built a very simple progressive web app leveraging Rust via WebAssembly. The Rust ecosystem around Wasm is getting more and more mature, especially compared to a few years ago.
However, some areas are still under construction, and while many things can be achieved, it isn't always straightforward. For example, it's possible to leverage multithreading support in web workers via Wasm from Rust, as described here, or using crates such as wasm-bindgen-rayon, but getting it to work involves additional steps, like rebuilding the Wasm target with additional flags.
\\nStill, we were able to build an app with powerful storage and networking integration, encryption support, and a convenient API for consuming this via a JavaScript or TypeScript frontend. The TypeScript bindings generated by wasm-bindgen
are already pretty good, but I also had good experiences using additional tools such as Tsify to expose the Rust API to any frontend.
Also, packages built using wasm-pack
for the web can be rather seamlessly integrated into existing projects, even if they use complex build tools such as Vite or webpack, as documented here.
Overall, both PWAs and especially WebAssembly are making big strides, and we're slowly moving in a direction where it's possible to build secure, sandboxed, fully featured, rich apps with near-native performance that run anywhere. I, for one, am looking forward to that future. 🙂
In this article, we'll explore MUI's Grid system in depth. Let's get started!
What is the MUI Grid component?

Material Design's grid system is implemented in MUI using the Grid
component. Under the hood, the Grid
component uses Flexbox properties for greater flexibility.
There are two types of grid components, container
and item
. To make the layout fluid and adaptive to different screen sizes, the item
widths are set in percentages, and padding creates spacing between each individual item
. There are five types of grid breakpoints, including xs
, sm
, md
, lg
, and xl
.
Using the MUI Grid component

Import the Grid
component into the JavaScript file using the following code:
import Grid from \'@mui/material/Grid\';\\n\\n
The container
prop gives the Grid
component the CSS properties of a flex
container, and the item
prop gives the CSS properties of a flex
item. Every item
must be wrapped in a container
, as shown below:
<Grid\\n container\\n // ...this parent component will be a flex-box container\\n>\\n <Grid\\n // ... this child component will be a flex item for the parent Grid\\n >\\n <Paper></Paper> {/* A simple flex item */}\\n </Grid>\\n</Grid>\\n\\n
Now that you’ve seen the basic structure of the MUI Grid
component, let’s dive a little deeper into its background and usage.
Editor’s note: This post was updated by Hussain Arif in April 2025 to reflect the latest updates from MUI v5, explain practical use cases such as nested grids and column spanning, and compare MUI Grid
with other layouts, such as CSS Grid and Flexbox.
Material Design is a popular design system developed by Google in 2014. It is a visual language that synthesizes the classic principles of good design with innovations in technology and science.
\\nGoogle and many other tech companies use Material Design extensively across their brand and products. In 2021, Google revamped its design system, making it more flexible for designers to create custom themes.
MUI is a React library that implements Google's Material Design and its grid system. Material Design, which is widely used in Android app development, defines a set of principles and guidelines for designing UI components. The creators of this technology shortened the project's name from Material UI to MUI in September 2021, clarifying that the project was never affiliated with Google.
MUI comes with prebuilt UI components, including buttons, navbars, navigation drawers, and, most importantly, the grid system. MUI v6 brought new features, an updated design, performance optimizations, enhanced theming options, and a new framework geared toward dashboards.
\\nA grid system defines a set of measurements to place elements or components on the page based on successive columns and rows. The grid system in Material Design is visually balanced. It adapts to screen sizes and orientation, ensuring a consistent layout across pages.
\\nThe grid system consists of three components:
\\nHere’s a diagram to visualize these components:
MUI Grid vs CSS Grid vs Flexbox: What's the difference?

MUI's Grid component uses standard CSS Flexbox to render child items to the browser. According to Stack Overflow, the key difference between MUI's Grid and plain Flexbox is that MUI lets developers use breakpoints, making it easier to build responsive web apps.
Furthermore, it’s important to remember that the Material Grid system does not support auto-placement of children. If you need auto-placement, the team suggests opting for the CSS Grid instead.
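For instance, a minimal sketch of CSS Grid auto-placement using MUI's Box component might look like this; the three-column template is arbitrary:

import * as React from "react";
import Box from "@mui/material/Box";
import Paper from "@mui/material/Paper";

// Sketch: children flow into the grid template automatically,
// with no per-item layout props required
export default function CssGridDemo() {
  return (
    <Box sx={{ display: "grid", gridTemplateColumns: "repeat(3, 1fr)", gap: 2 }}>
      {[0, 1, 2, 3, 4, 5].map((value) => (
        <Paper key={value} sx={{ padding: 2 }}>
          Item {value}
        </Paper>
      ))}
    </Box>
  );
}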
\\nThe MUI library provides React components that implement Google’s Material Design. Let’s explore implementing Material Design in a React app using MUI.
\\nRun the command below to install the required dependencies in your project:
\\nnpm install @mui/material @emotion/react @emotion/styled\\n\\n
Material Design uses Roboto as the default font, so don’t forget to add it as well:
\\nnpm install @fontsource/roboto\\n\\n
Alternatively, you can omit the snippet above and add the following imports to the entry point of your React app instead, which should normally be the main.tsx
file:
import \\"@fontsource/roboto/300.css\\";\\nimport \\"@fontsource/roboto/400.css\\";\\nimport \\"@fontsource/roboto/500.css\\";\\nimport \\"@fontsource/roboto/700.css\\";\\n\\n
All the components are isolated, self-supporting, and only inject the styles that they need to present. To get things going, let’s use the example below, which creates a simple button component:
\\nimport * as React from \\"react\\";\\n//import the Button component from MUI\\nimport Button from \\"@mui/material/Button\\";\\nexport default function App() {\\n return (\\n //Draw a simple button\\n <Button variant=\\"contained\\" color=\\"primary\\">\\n MUI Demo\\n </Button>\\n );\\n}\\n\\n
Now that we’ve briefly learnt how to use the MUI library, let’s use the Grid API. This line of code imports the Grid component:
\\nimport Grid from \\"@mui/material/Grid\\";\\n#make sure it\'s not:\\nimport Grid from \\"@mui/material/GridLegacy\\"; # this is outdated!\\n\\n
Let’s look at the various props you can provide to the container
and item
to build a flexible layout.
You can apply the spacing
prop to the Grid container
to create spaces between each individual grid item
. In the following example, we interactively change the spacing
prop value by passing the value through a set of radio button components:
import Grid from \\"@mui/material/Grid2\\";\\nimport FormLabel from \\"@mui/material/FormLabel\\";\\nimport FormControlLabel from \\"@mui/material/FormControlLabel\\";\\nimport RadioGroup from \\"@mui/material/RadioGroup\\";\\nimport Radio from \\"@mui/material/Radio\\";\\nimport Paper from \\"@mui/material/Paper\\";\\nimport { useState } from \\"react\\";\\nimport { FormControl } from \\"@mui/material\\";\\n//create our style\\nconst styles = {\\n paper: {\\n height: 140,\\n width: 100,\\n backgroundColor: \\"pink\\",\\n },\\n};\\nconst SpacingGridDemo: React.FC = () => {\\n //create our spacing variable. Default value will be 2\\n const [spacing, setSpacing] = useState(2);\\n //when executed, change the value of spacing Hook to chosen value\\n const handleChange = (event: React.ChangeEvent<HTMLInputElement>): void => {\\n setSpacing(Number(event.target.value));\\n };\\n return (\\n <div>\\n <div>\\n {/*This container will be aligned in the center */}\\n {/* Spacing will vary depending on user choice.*/}\\n <Grid container spacing={spacing}>\\n {/*Render 4 empty black boxes as items of this container*/}\\n {[0, 1, 2, 3].map((value) => (\\n <Grid key={value}>\\n <Paper style={styles.paper}>{value}</Paper>\\n </Grid>\\n ))}\\n </Grid>\\n </div>\\n <div>\\n <Paper>\\n <div>\\n {/* Show user\'s chosen spacing value*/}\\n <FormLabel>{spacing}</FormLabel>\\n <FormControl>\\n <RadioGroup\\n name=\\"spacing\\"\\n aria-label=\\"spacing\\"\\n value={spacing.toString()}\\n onChange={handleChange}\\n row\\n >\\n {/*Render multiple spacing values in a form */}\\n {[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10].map((value) => (\\n <FormControlLabel\\n key={value}\\n value={value.toString()}\\n control={<Radio />}\\n label={value.toString()}\\n />\\n ))}\\n </RadioGroup>\\n </FormControl>\\n </div>\\n </Paper>\\n </div>\\n </div>\\n );\\n};\\nexport default SpacingGridDemo;\\n\\n
We can create layouts for different screen sizes by using the breakpoint keys xs, sm, md, lg, and xl inside the size prop of the grid items. Fluid grids can scale the grid items and resize content within them:
//create our styles\\nconst classes = {\\n root: {\\n flexGrow: 1,\\n },\\n paper: {\\n padding: 20,\\n color: \\"blue\\",\\n fontFamily: \\"Roboto\\",\\n },\\n};\\nconst BreakpointGridDemo: React.FC = () => {\\n return (\\n <div style={classes.root}>\\n <Grid container spacing={3}>\\n {/*Create items with different breakpoints */}\\n {/*For example,This item will be 12 units wide on extra small screens */}\\n <Grid size={{ xs: 12 }}>\\n <Paper style={classes.paper}>xs=12</Paper>\\n </Grid>\\n {/*This item will be 12 units on extra small screens */}\\n {/*But will be 6 units on small screens */}\\n <Grid size={{ xs: 12, sm: 6 }}>\\n <Paper style={classes.paper}>xs=12 sm=6</Paper>\\n </Grid>\\n <Grid size={{ xs: 12, sm: 6 }}>\\n <Paper style={classes.paper}>xs=12 sm=6</Paper>\\n </Grid>\\n <Grid size={{ xs: 6, sm: 3 }}>\\n <Paper style={classes.paper}>xs=6 sm=3</Paper>\\n </Grid>\\n <Grid size={{ xs: 6, sm: 3 }}>\\n <Paper style={classes.paper}>xs=6 sm=3</Paper>\\n </Grid>\\n <Grid size={{ xs: 6, sm: 3 }}>\\n <Paper style={classes.paper}>xs=6 sm=3</Paper>\\n </Grid>\\n <Grid size={{ xs: 6, sm: 3 }}>\\n <Paper style={classes.paper}>xs=6 sm=3</Paper>\\n </Grid>\\n </Grid>\\n </div>\\n );\\n};\\nexport default BreakpointGridDemo;\\n\\n
The auto-layout feature allows the grid items to auto-resize and occupy the available space without having to specify the width of the item. If you set the width on one of the items, the child items would automatically resize and share the available space.
In the following example, you can see that the items around the size={6} item auto-resize, resulting in a perfect layout:
const AutoGridDemo: React.FC = () => {\\n return (\\n <Box sx={{ flexGrow: 1 }}>\\n <Grid container spacing={2}>\\n {/*They all will have default widths */}\\n {[1, 2, 3, 4].map((item) => (\\n <Grid key={item} size=\\"grow\\">\\n <Paper style={classes.paper}>size=grow</Paper>\\n </Grid>\\n ))}\\n </Grid>\\n <Grid container spacing={2}>\\n {[1, 2].map((item) => (\\n <Grid size=\\"grow\\" key={item}>\\n <Paper style={classes.paper}>size=grow</Paper>\\n </Grid>\\n ))}\\n {/*However, this component will have 6 units of space */}\\n <Grid size={6}>\\n <Paper style={classes.paper}>size=6</Paper>\\n </Grid>\\n </Grid>\\n </Box>\\n );\\n};\\n\\n
We can also use grids within each other. In the demo below, a Grid component acts as both a container and an item, which allows us to have another grid inside a grid item. In this case, it’s the <InnerGrid /> component:
const InnerGrid: React.FC = () => {\\n return (\\n <Grid container spacing={3}>\\n {[1, 2, 3].map((item) => (\\n <Grid key={item}>\\n <Paper style={classes.paper}>Inner Grid item</Paper>\\n </Grid>\\n ))}\\n </Grid>\\n );\\n};\\nconst AutoGridDemo: React.FC = () => {\\n return (\\n <Box sx={{ flexGrow: 1 }}>\\n <h1>Inner Grid 1</h1>\\n <Grid container>\\n <InnerGrid />\\n </Grid>\\n <h1>Inner Grid 2</h1>\\n <Grid container>\\n <InnerGrid />\\n </Grid>\\n </Box>\\n );\\n};\\n\\n
Note: Nested Grid
containers should be a direct child of another grid container. If there are any non-grid components in between, the new container will act as a root container:
<Grid container>\\n <Grid container> {/*Inherits columns and spacing from the parent item*/}\\n <div>\\n <Grid container> {/*Root container with its own properties.*/}\\n\\n
The MUI library also allows for changing the column or spacing depending on the breakpoint. Here’s a code sample that demonstrates breakpoints in action:
\\nconst ResponsiveDemo: React.FC = () => {\\n return (\\n <Box flexGrow={1} padding={2}>\\n <Grid\\n container\\n //on smaller screens, the spacing is 2\\n //on medium and larger screens, the spacing is 15\\n spacing={{ xs: 2, md: 15 }}\\n columns={{ xs: 2, sm: 4, md: 18 }}\\n >\\n {Array.from(Array(6)).map((_, index) => (\\n //similarly, the size of the grid item is different on different screen sizes\\n <Grid key={index} size={{ xs: 1, sm: 4, md: 8 }} padding={2}>\\n <Paper\\n style={{ padding: 19, backgroundColor: \\"pink\\", color: \\"black\\" }}\\n >\\n {index + 1}\\n </Paper>\\n </Grid>\\n ))}\\n </Grid>\\n </Box>\\n );\\n};\\n\\n
According to the documentation, it’s not possible to set the direction
property to column
or column-reverse
. This is because MUI designed the Grid to divide layouts into columns, not rows.
\\nTo create a vertical layout, use MUI’s Stack
component instead:
return (\\n <Box sx={{ flexGrow: 1 }}>\\n <Grid container spacing={2}>\\n {/*Nested Grid with a stack*/}\\n <Grid size={4}>\\n {/*Create a vertical stack with 3 Papers*/}\\n <Stack spacing={2}>\\n <Paper>Column 1 - Row 1</Paper>\\n <Paper>Column 1 - Row 2</Paper>\\n <Paper>Column 1 - Row 3</Paper>\\n </Stack>\\n </Grid>\\n {/*Create a nested grid*/}\\n <Grid size={8}>\\n <Paper sx={{ height: \\"100%\\", boxSizing: \\"border-box\\" }}>Column 2</Paper>\\n </Grid>\\n </Grid>\\n </Box>\\n);\\n\\n
This is useful in cases where you would want to build a dashboard component for your app.
\\nReal-world web applications frequently require displaying extensive datasets from APIs, often numbering in the thousands. Directly rendering such large component lists to the DOM results in significant performance bottlenecks. To address this, developers employ virtualization techniques using libraries like react-window
with Material UI’s List
component, ensuring smooth rendering and optimal user experience even with massive datasets.
The following example demonstrates how to efficiently display a large list of Nintendo Switch games using this approach:
\\nimport Box from \\"@mui/material/Box\\";\\nimport ListItem from \\"@mui/material/ListItem\\";\\nimport ListItemButton from \\"@mui/material/ListItemButton\\";\\nimport ListItemText from \\"@mui/material/ListItemText\\";\\nimport { useEffect, useState } from \\"react\\";\\nimport { FixedSizeList, ListChildComponentProps } from \\"react-window\\";\\n\\ntype ItemData = {\\n id: number;\\n name: string;\\n};\\n\\nconst ReactWindowDemo: React.FC = () => {\\n const [data, setData] = useState<ItemData[]>([]);\\n const getData = async () => {\\n const resp = await fetch(\\"https://api.sampleapis.com/switch/games\\");\\n const json = await resp.json();\\n setData(json);\\n };\\n useEffect(() => {\\n getData();\\n }, []);\\n\\n function renderRow(props: ListChildComponentProps) {\\n const { index, style } = props;\\n return (\\n <ListItem\\n style={style}\\n key={data[index].id}\\n component=\\"div\\"\\n disablePadding\\n >\\n <ListItemButton>\\n <ListItemText primary={`Name: ${data[index].name}`} />\\n </ListItemButton>\\n </ListItem>\\n );\\n }\\n\\n return (\\n <Box>\\n {data.length && (\\n <FixedSizeList\\n height={700}\\n width={360}\\n itemSize={46}\\n itemCount={200}\\n overscanCount={5}\\n >\\n {renderRow}\\n </FixedSizeList>\\n )}\\n </Box>\\n );\\n};\\n\\n
Using a border for a Grid
item is simple via the border
property:
<Grid\\n container\\n sx={{\\n \\"--Grid-borderWidth\\": \\"4px\\",\\n borderTop: \\"var(--Grid-borderWidth) solid\\",\\n borderLeft: \\"var(--Grid-borderWidth) solid\\",\\n borderColor: \\"divider\\",\\n \\"& > div\\": {\\n borderRight: \\"var(--Grid-borderWidth) solid\\",\\n borderBottom: \\"var(--Grid-borderWidth) solid\\",\\n borderColor: \\"divider\\",\\n },\\n }}\\n>\\n {/*Further code...*/}\\n</Grid>\\n\\n
In this article, you learned how to build responsive grid layouts via the MUI Grid system. Hopefully, this guide will help you take full advantage of the numerous features we discussed. They’ll come in handy at some point or another when you’re working with the Material Design system.
\\n\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nCode-generating AI has gone from a curiosity to a core tool in a developer’s toolkit. Like any good workplace relationship, it started rocky — we were worried it’d replace us. But it turns out, AI is less “job stealer” and more “helpful coworker.” Sure, it sometimes spits out nonsense, but it also gets a shocking amount right.
\\nIn this post, we’ll break down how AI code generation works, where it’s useful, which tools are leading the pack, and what all this means for the future of software development.
\\nAI code generation is when artificial intelligence writes code for you based on natural language prompts — basically, turning “English” into “JavaScript,” or Python, or whatever you’re working with.
\\nYou give it instructions like “write a function that calculates factorials,” and boom — you get back working code (hopefully).
\\nThe tech behind AI coding tools isn’t magic — but it kind of feels like it.
\\nAt its core, AI code generation uses large language models (LLMs) trained on huge datasets: open-source code, documentation, and natural language. These models learn patterns between human input and code output, so when you give them a prompt, they predict the most likely, most useful code in return.
\\nHere’s a quick breakdown:
\\nAI models like GPT or Claude are trained on vast amounts of data: GitHub repos, code documentation, and even Stack Overflow. They learn both code structure and how that code maps to human-readable instructions.
\\nWhen you enter a prompt (e.g., “Write a JavaScript function to calculate factorials”), the model breaks it into tokens — chunks of text it understands. It then analyzes your request in context, using that training to figure out what you want.
\\nAI doesn’t think like us. Instead, it uses probability to guess what token (i.e., word, symbol, line of code) should come next based on what it’s seen before. So when you say “factorial,” the model knows that recursion or a loop is probably involved, and it generates something like this:
\\nfunction factorial(n) {\\n if (n === 0 || n === 1) {\\n return 1;\\n } else {\\n return n * factorial(n - 1);\\n }\\n }\\n
It doesn’t do that because it “understands” factorials — it’s just really good at remixing examples it’s seen a thousand times before.
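Because the model is pattern-matching rather than reasoning, it could just as plausibly return the loop-based version of the same function:

// Iterative factorial: the other pattern the model has seen many times
function factorial(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}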
\\nOnce the pattern is predicted, the model builds the code token by token. Modern models use attention mechanisms to focus on the most relevant parts of your prompt, making output more accurate and context-aware.
Oh, and about those “token limits”: every model has a context window, which caps how much text it can handle at once. Some support 4K tokens, others up to 200K. If you’re working on a massive project, try splitting it into smaller chunks for better results.
\\nToday’s devs are using AI in a few key ways:
\\nThink GitHub Copilot or Codeium. Start typing for (let i = 0;
, and it finishes the loop before you blink.
Comment something like // fetch user data from API
, and your IDE spits out a full async function, headers, error handling, and all.
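For instance, the generated function might look something like this sketch (the endpoint and response shape are illustrative):

// fetch user data from API
async function fetchUserData(userId) {
  const response = await fetch(`https://api.example.com/users/${userId}`, {
    headers: { Accept: "application/json" },
  });
  // Surface HTTP errors instead of silently returning bad data
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}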
Tools like ChatGPT, Claude, Perplexity, and Bolt.new offer chatbot-style interfaces. You ask questions, get code, debug errors, or just vent about that one persistent bug. It’s Stack Overflow meets pair programming.
\\nEach tool adds value differently, but they all aim to save time, reduce boilerplate, and keep devs in flow. Don’t forget that they can even help you out with motivational thoughts when your bug is more serious.
\\nAI coding exploded in 2022, and now the ecosystem’s stacked with options. Here are some of the most popular tools right now:
\\nThe top tools dominating the scene right now are:
\\nWe have an article focusing on the best AI coding tools for 2025, if you’re looking for a deeper dive.
\\nLet’s be real — this stuff is a game-changer.
\\nBut AI code generation isn’t perfect, either. There are major drawbacks. A recent study found that software developers who rely on code-generating AI tech are more likely to introduce security vulnerabilities into the applications they develop. Here are some of the big cons:
\\nSo… is AI going to take our jobs? Not really. History tells us tools don’t kill jobs — they change them.
\\nTake human calculators at NASA. They didn’t vanish when the IBM 7090 arrived — they learned Fortran and adapted. Or think about mowing lawns; the lawnmower didn’t eliminate the job, it just changed how it got done.
\\nThis is essentially what’ll happen if the AI thing isn’t just a bubble. Going forward, you’ll want to either be building these AI systems yourself or working in a field where you complement what AI can do, rather than compete with it. Coders who understand the tools — or help build them — will thrive. Those who don’t? Not so much.
\\nAI isn’t replacing developers anytime soon, but it is changing the job description. It’s less about resisting the shift and more about evolving with it. Code generation is here, and it’s not going anywhere.
\\nSee you on the other side. We’ll probably still be coding, just a lot faster.
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\n<img>
tag\\n <img>
tag for static SVGs\\n <svg>
element\\n React.memo
for static SVG components\\n <title>
and <desc>
\\n SVG, or Scalable Vector Graphics, is a vector graphics image format based on XML. It was developed in the late 90s and was poorly supported until around 2016.
\\nToday, a huge percentage of icon libraries, such as Flaticon, Font Awesome, and Material Icon have full support for SVG. Brands such as X, YouTube, Udacity, and Netflix use SVG for some of their images and icons.
In this article, we will explore the advantages of using SVG over other image formats and various ways to implement SVGs in React applications, including their integration, animation, and usage as React components.
\\nThere are multiple ways to handle SVGs in React. Let’s explore a few:
\\nOne common approach is inline SVG, where the <svg>
element is directly embedded in JSX. This method provides full control over styling, animations, and interactivity. However, it can make the code cluttered, especially when working with complex SVGs.
<img> tag
Another method is using the <img>
tag to load an external SVG file. This is great for static SVGs like logos that do not require styling or animations. The downside is that SVGs used in <img>
tags cannot be styled with CSS.
For a more React-friendly approach, you can import SVGs as React components using SVGR, a tool that converts SVGs into fully customizable React components. This method is great for dynamic styling, animations, and modifying SVG elements via props.
\\nIf you need flexibility, full control over styling and interactivity, inline SVGs or SVGR are great choices. If simplicity is a priority, using an <img>
tag or component import might be the best option.
Editor’s note: This article was last updated by Emmanuel John in April 2025 to include information on SVGR usage, cover SVG utilization in React frameworks such as Next.js, Gatsby, and Refine, and troubleshoot common mistakes when using SVGs in React.
\\nBelow, we’ll explore various ways to use or render this React SVG logo on a webpage. It’s worth noting that Create React App (CRA) has a built-in configuration for handling SVGs. Some of the examples in this article that require modifying the webpack setup apply only to custom React projects using webpack as a bundler.
\\nYou may need different plugins if you don’t use webpack for your custom React project.
<img> tag for static SVGs
In order to use SVGs or any other image format in the <img>
tag, we have to set up a file loader system in whichever module bundler we’re using. Here, I will show you how to set it up in a few steps if you are already using webpack as your bundler.
If you are using webpack 4, first install the file-loader library with the command $ npm install file-loader --save-dev
. This will install it as a dev dependency.
You can update your webpack configuration file rules with this code:
\\nconst webpack = require(\'webpack\');\\n\\nmodule.exports = {\\n entry: \'./src/index.js\',\\n module: {\\n rules: [\\n //...\\n {\\n test: /\\\\.(png|jp(e*)g|svg|gif)$/,\\n use: [\\n {\\n loader: \'file-loader\',\\n options: {\\n name: \'images/[hash]-[name].[ext]\',\\n },\\n },\\n ],\\n },\\n ],\\n },\\n //...\\n};\\n\\n
However, the file-loader library is deprecated if you are using webpack 5; you can use asset modules instead. With asset modules, you can use asset files in your project setup without installing additional loaders. Update the rules field of your webpack configuration file to include the following:
\\nmodule.exports = {\\n entry: \\"./src/index.js\\",\\n module: {\\n rules: [\\n //...\\n {\\n test: /\\\\.(png|jp(e*)g|svg|gif)$/,\\n type: \\"asset/resource\\",\\n },\\n ],\\n },\\n //...\\n};\\n\\n
Now you can import your SVG and use it as a variable, like this:
\\nimport React from \'react\';\\n{/*images*/}\\nimport ReactLogo from \'./logo.svg\';\\n\\nconst App = () => {\\n return (\\n <div className=\\"App\\">\\n <img src={ReactLogo} alt=\\"React Logo\\" />\\n </div>\\n );\\n}\\nexport default App;\\n\\n
The disadvantage of this method is that, unlike the other approaches, the SVG cannot be styled with CSS when rendered through an img element, so it is only suitable for non-customizable images such as logos.
<svg>
elementWith the same webpack settings above, we can use the <svg>
element by copying and pasting the contents of the .svg
file into our code. Here is a sample use case:
import React from \'react\';\\n\\nconst App = () => {\\n return (\\n <div className=\\"App\\">\\n <svg xmlns=\\"http://www.w3.org/2000/svg\\" viewBox=\\"0 0 841.9 595.3\\">\\n <g fill=\\"#61DAFB\\">\\n <path d=\\"M666.3 296.5c0-32.5-40.7-63.3-103.1-82.4 14.4-63.6 8-114.2-20.2-130.4-6.5-3.8-14.1-5.6-22.4-5.6v22.3c4.6 0 8.3.9 11.4 2.6 13.6 7.8 19.5 37.5 14.9 75.7-1.1 9.4-2.9 19.3-5.1 29.4-19.6-4.8-41-8.5-63.5-10.9-13.5-18.5-27.5-35.3-41.6-50 32.6-30.3 63.2-46.9 84-46.9V78c-27.5 0-63.5 19.6-99.9 53.6-36.4-33.8-72.4-53.2-99.9-53.2v22.3c20.7 0 51.4 16.5 84 46.6-14 14.7-28 31.4-41.3 49.9-22.6 2.4-44 6.1-63.6 11-2.3-10-4-19.7-5.2-29-4.7-38.2 1.1-67.9 14.6-75.8 3-1.8 6.9-2.6 11.5-2.6V78.5c-8.4 0-16 1.8-22.6 5.6-28.1 16.2-34.4 66.7-19.9 130.1-62.2 19.2-102.7 49.9-102.7 82.3 0 32.5 40.7 63.3 103.1 82.4-14.4 63.6-8 114.2 20.2 130.4 6.5 3.8 14.1 5.6 22.5 5.6 27.5 0 63.5-19.6 99.9-53.6 36.4 33.8 72.4 53.2 99.9 53.2 8.4 0 16-1.8 22.6-5.6 28.1-16.2 34.4-66.7 19.9-130.1 62-19.1 102.5-49.9 102.5-82.3zm-130.2-66.7c-3.7 12.9-8.3 26.2-13.5 39.5-4.1-8-8.4-16-13.1-24-4.6-8-9.5-15.8-14.4-23.4 14.2 2.1 27.9 4.7 41 7.9zm-45.8 106.5c-7.8 13.5-15.8 26.3-24.1 38.2-14.9 1.3-30 2-45.2 2-15.1 0-30.2-.7-45-1.9-8.3-11.9-16.4-24.6-24.2-38-7.6-13.1-14.5-26.4-20.8-39.8 6.2-13.4 13.2-26.8 20.7-39.9 7.8-13.5 15.8-26.3 24.1-38.2 14.9-1.3 30-2 45.2-2 15.1 0 30.2.7 45 1.9 8.3 11.9 16.4 24.6 24.2 38 7.6 13.1 14.5 26.4 20.8 39.8-6.3 13.4-13.2 26.8-20.7 39.9zm32.3-13c5.4 13.4 10 26.8 13.8 39.8-13.1 3.2-26.9 5.9-41.2 8 4.9-7.7 9.8-15.6 14.4-23.7 4.6-8 8.9-16.1 13-24.1zM421.2 430c-9.3-9.6-18.6-20.3-27.8-32 9 .4 18.2.7 27.5.7 9.4 0 18.7-.2 27.8-.7-9 11.7-18.3 22.4-27.5 32zm-74.4-58.9c-14.2-2.1-27.9-4.7-41-7.9 3.7-12.9 8.3-26.2 13.5-39.5 4.1 8 8.4 16 13.1 24 4.7 8 9.5 15.8 14.4 23.4zM420.7 163c9.3 9.6 18.6 20.3 27.8 32-9-.4-18.2-.7-27.5-.7-9.4 0-18.7.2-27.8.7 9-11.7 18.3-22.4 27.5-32zm-74 58.9c-4.9 7.7-9.8 15.6-14.4 23.7-4.6 8-8.9 16-13 24-5.4-13.4-10-26.8-13.8-39.8 13.1-3.1 26.9-5.8 41.2-7.9zm-90.5 125.2c-35.4-15.1-58.3-34.9-58.3-50.6 0-15.7 22.9-35.6 58.3-50.6 8.6-3.7 18-7 27.7-10.1 5.7 19.6 13.2 40 22.5 60.9-9.2 20.8-16.6 41.1-22.2 60.6-9.9-3.1-19.3-6.5-28-10.2zM310 490c-13.6-7.8-19.5-37.5-14.9-75.7 1.1-9.4 2.9-19.3 5.1-29.4 19.6 4.8 41 8.5 63.5 10.9 13.5 18.5 27.5 35.3 41.6 50-32.6 30.3-63.2 46.9-84 46.9-4.5-.1-8.3-1-11.3-2.7zm237.2-76.2c4.7 38.2-1.1 67.9-14.6 75.8-3 1.8-6.9 2.6-11.5 2.6-20.7 0-51.4-16.5-84-46.6 14-14.7 28-31.4 41.3-49.9 22.6-2.4 44-6.1 63.6-11 2.3 10.1 4.1 19.8 5.2 29.1zm38.5-66.7c-8.6 3.7-18 7-27.7 10.1-5.7-19.6-13.2-40-22.5-60.9 9.2-20.8 16.6-41.1 22.2-60.6 9.9 3.1 19.3 6.5 28.1 10.2 35.4 15.1 58.3 34.9 58.3 50.6-.1 15.7-23 35.6-58.4 50.6zM320.8 78.4z\\"/>\\n <circle cx=\\"420.9\\" cy=\\"296.5\\" r=\\"45.7\\"/>\\n <path d=\\"M520.5 78.1z\\"/>\\n </g>\\n </svg>\\n </div>\\n );\\n}\\nexport default App;\\n\\n
You can likely already see the disadvantages of using this method. When the image is more complex, the SVG file becomes larger, and because SVG is stored in text, we have a whole bunch of text in our code.
\\nSVGs can be imported and used directly as React components in your React code. The image is not loaded as a separate file; rather, it’s rendered along with the HTML. A sample use case would look like this:
\\nimport { ReactComponent as Logo} from \'./logo.svg\';\\nimport \'./App.css\';\\n\\nfunction App() {\\n return (\\n <div className=\\"App\\">\\n <Logo />\\n </div>\\n );\\n}\\nexport default App;\\n\\n
Although this approach is simple to implement, it has some drawbacks. The imported SVG functions as an image element, not a full-fledged React component, and cannot be customized with props. It’s not suitable for complex SVGs with multiple elements or styles.
\\nAnother approach is converting it to a React component before using it in your React application:
\\nconst BarIcon = () => {\\n return (\\n <svg\\n className=\\"w-6 h-6 text-gray-800 dark:text-white\\"\\n aria-hidden=\\"true\\"\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\">\\n <path\\n stroke=\\"currentColor\\"\\n strokeLinecap=\\"round\\"\\n strokeWidth=\\"2\\"\\n d=\\"M5 7h14M5 12h14M5 17h14\\"\\n />\\n </svg>\\n )\\n}\\n\\nfunction App() {\\n return (\\n <header className=\\"App-header\\">\\n <BarIcon />\\n </header>\\n );\\n}\\nexport default App;\\n\\n
JSX supports the svg
tag, allowing for the direct copy-paste of SVGs into React components without using a bundler. SVGs are in XML format, similar to HTML, and can be converted to JSX syntax. Alternatively, a compiler can be used instead of manually converting:
export default function App() {
  return (
    <svg
      className="w-10 h-10 text-gray-800 dark:text-white"
      aria-hidden="true"
      xmlns="http://www.w3.org/2000/svg"
      fill="none"
      width={24}
      height={24}
      viewBox="0 0 24 24"
    >
      <path
        stroke="currentColor"
        strokeLinecap="round"
        strokeWidth="2"
        d="M5 7h14M5 12h14M5 17h14"
      />
    </svg>
  );
}
Inline SVGs offer access to their properties, allowing for customization and styling. However, large inline SVGs can hurt code readability and productivity, so for big, complex images, consider a separate file or even a PNG or JPEG.
\\nSVGR is an awesome tool that converts your SVGs into React components. It now use SVGO v2 for SVG optimization, which no longer supports automatic merging of configurations.
\\nTo set it up, first install the package by running the command $ npm install @svgr/webpack --save-dev
. Then, update your webpack configuration rule to use SVGR for SVGs:
const webpack = require(\'webpack\');\\n\\nmodule.exports = {\\n entry: \'./src/index.js\',\\n module: {\\n rules: [\\n //...\\n {\\n test: /\\\\.svg$/,\\n use: [\'@svgr/webpack\'],\\n },\\n ],\\n },\\n //...\\n};\\n\\n
Now you can import SVG images as React components and use them in your code like so:
\\nimport React from \'react\';\\nimport ReactLogo from \'./logo.svg\';\\n\\nconst App = () => {\\n return (\\n <div className=\\"App\\">\\n <ReactLogo />\\n </div>\\n );\\n}\\nexport default App;\\n\\n
To transform SVG into React components, SVGR applies several complex transformations, starting with optimizing the SVG using SVGO.
\\n\\nIt then transforms HTML into JSX through multiple steps, such as converting the SVG into HAST (HTML AST), then into Babel AST (JSX AST), and further modifying the AST using Babel to rename attributes and adjust values. Next, it wraps the JSX into a React component, converts the Babel AST into code, and finally formats the code using Prettier for a clean and structured output.
\\nData URLs are URLs prefixed with the data:
scheme, which allows content creators to embed small files inline in documents. This approach enables us to use SVG images like an inline element.
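For example, a small SVG can be written by hand as a URL-encoded data URL (the markup below is illustrative); the loader described next automates this encoding:

// A 10x10 red circle embedded directly as a URL-encoded SVG data URL
const Dot = () => (
  <img
    alt="Red dot"
    src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='10' height='10'%3E%3Ccircle cx='5' cy='5' r='5' fill='red'/%3E%3C/svg%3E"
  />
);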
How do you achieve this? First, you’ll need an appropriate loader if you are using webpack. For this use case, I’ll use svg-url-loader
. You can add it to your project by running the command $ npm install svg-url-loader --save-dev
.
Then, update the webpack configuration file rules section with the following:
\\nconst webpack = require(\'webpack\');\\n\\nmodule.exports = {\\n entry: \'./src/index.js\',\\n module: {\\n rules: [\\n //...\\n {\\n test: /\\\\.svg$/,\\n use: [\\n {\\n loader: \'svg-url-loader\',\\n options: {\\n limit: 10000,\\n },\\n },\\n ],\\n },\\n ],\\n },\\n //...\\n};\\n\\n
Now you can import your SVG file and use it in your React component like this:
\\nimport ReactLogo from \'./logo.svg\';\\n\\nconst App = () => {\\n return (\\n <div className=\\"App\\">\\n <img src={ReactLogo} alt=\\"React Logo\\" />\\n </div>\\n );\\n}\\n\\n
This usually results in something like this in the DOM:
\\n<img src=\\"data:image/svg+xml,%3csvg...\\" alt=\\"React Logo\\" />\\n\\n
Adding SVG markup directly in your React component increases its file size, making it hard to maintain. Despite this limitation, using inline SVG comes with its advantages. You can easily apply CSS styles to the SVG markup when you embed it inline, compared to using the <img>
tag.
To leverage the benefits of embedding SVGs inline without worrying about the maintainability of your component, you can use the react-svg package. It fetches the SVG asynchronously and embeds it inline.
\\nYou can install it from the npm package registry like so:
\\nnpm install react-svg\\n\\n
You can then import the ReactSVG
component and render it in your React application. The ReactSVG
component takes the URL for your SVG as the value of the src
prop. It also takes several optional props you can look up in the documentation:
import { ReactSVG } from \\"react-svg\\";\\n\\n<ReactSVG src=\\"icon.svg\\" />\\n\\n
As mentioned in the introduction, one of the benefits of using SVGs over other image formats is that SVGs can be animated. You can animate SVGs using CSS or React animation libraries like Framer Motion and React Spring.
\\nA few things to be aware of:
- Animating very complex images inside an inline <svg> element can get expensive. Here, I would recommend you go with PNG or JPEGs

While manually converting SVGs to React components is possible, it takes time and is prone to errors. Fortunately, several techniques and tools make this procedure easier:
- The svg tag can be used to embed SVG code directly within JSX components for simple SVGs. This method works well for static SVGs that don’t need to be dynamically modified
- Libraries like react-svg or react-inlinesvg offer further features and functionality

When used as React components, SVGs become excellent tools for creating clean and scalable user interfaces. Effectively managing React SVG icons requires careful consideration for scalability and maintainability, and some strategies include:
- Use descriptive file names: HamburgerIcon.js and NotificationIcon.js are examples of names that improve readability and organization

TypeScript allows developers to enforce type safety in React applications, including handling SVG elements and their properties. Adhering to best practices for typing SVG properties is important for preventing runtime errors and ensuring code reliability. This section will explore these best practices and provide examples of passing SVG elements as props to other components.
\\nOne technique for typing SVG properties in TypeScript to ensure type safety includes using the SVGProps
type that React provides to type SVG-related props correctly. This guards against type errors and guarantees that the right props are supplied to SVG elements:
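A minimal sketch of this approach might look like the following (the ArrowIcon component is illustrative):

import React from "react";

// React.SVGProps types every standard SVG attribute, so callers can
// safely pass fill, stroke, width, event handlers, and so on
const ArrowIcon = (props: React.SVGProps<SVGSVGElement>) => (
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" {...props}>
    <path d="M5 12h14M13 6l6 6-6 6" stroke="currentColor" fill="none" />
  </svg>
);

export default ArrowIcon;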
Another technique involves defining a custom interface that encompasses all relevant SVG properties, extending built-in types for specific attributes:
\\ninterface SvgProps {\\n fill?: string;\\n stroke?: string;\\n width?: number;\\n height?: number;\\n // Add other relevant SVG attributes here\\n}\\n\\n
Finally, consider exploring third-party libraries like react-svg or SVGR for pre-defined type definitions for SVG manipulation within React.
\\nNow, let’s see how to provide SVG elements as props to other components in TypeScript:
\\n// Define a component that accepts SVG element as a prop\\ninterface Props {\\n svgElement: React.ReactElement<React.SVGProps<SVGElement>>;\\n}\\nconst SVGWrapper: React.FC<Props> = ({ svgElement }) => (\\n <div>\\n {svgElement}\\n </div>\\n);\\n\\n
Here we create an interface Props for the SVGWrapper
component that accepts svgElement
props of the type React.ReactElement<React.SVGProps<SVGElement>>
. The SVGWrapper
component renders the svgElement
that it gets.
In the App
component, we can now pass instances of our SVG icons and components as svgElement
props to SVGWrapper
:
import { BarIcon } from \'./BarIcon\';\\n\\nexport default function App() {\\n return (\\n <div className=\\"App\\">\\n <SVGWrapper svgElement={<BarIcon fill=\\"#fff\\" className=\\"w-10 h-10 text-gray-800 dark:text-white\\" />} />\\n </div>\\n );\\n}\\n\\n
This way, you can pass SVG elements as props to other components in TypeScript.
\\ninterface BarProps {\\n fill?: string;\\n width?: number;\\n height?: number;\\n className?: string;\\n}\\nexport const BarIcon = (\\n {\\n fill,\\n width = 20,\\n height = 20,\\n className,\\n }: BarProps\\n) => {\\n return (\\n <svg\\n className={className}\\n aria-hidden=\\"true\\"\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill={fill}\\n width={width}\\n height={height}\\n viewBox=\\"0 0 24 24\\">\\n <path\\n stroke=\\"currentColor\\"\\n strokeLinecap=\\"round\\"\\n strokeWidth=\\"2\\"\\n d=\\"M5 7h14M5 12h14M5 17h14\\"\\n />\\n </svg>\\n )\\n}\\n\\n
The component defines an interface BarProps
, which allows for the specification of optional props like width, height, and fill color, and then renders the SVG code using the path element.
Another method is to pass the SVG code directly as a string prop. For instance, consider this:
\\nimport React from \\"react\\"\\n\\ninterface ButtonProps {\\n onClick: () => void;\\n svg: string; // Prop for the SVG code\\n}\\nconst Button: React.FC<ButtonProps> = ({ onClick, svg }) => (\\n <button onClick={onClick}>\\n <div dangerouslySetInnerHTML={{ __html: svg }} />\\n </button>\\n);\\n\\nconst App = () => {\\n const svgString = \\"<svg width=\'100\' height=\'100\'><circle cx=\'50\' cy=\'50\' r=\'40\' stroke=\'black\' strokeWidth=\'3\' fill=\'red\' /></svg>\\";\\n\\n return (\\n <div>\\n <Button\\n onClick={() => console.log(\'Svg\')}\\n svg={svgString} \\n />\\n </div>\\n );\\n};\\n\\nexport default App;\\n\\n
This example demonstrates using the ButtonProps
interface to define props for the Button
, which takes a string property svg
and onClick
as a prop. The component renders the SVG using dangerouslySetInnerHTML
, assigning it to the __html
key, bypassing React’s XSS protection.
It’s crucial to ensure the SVG string is safe to render, as it could pose a security risk if it contains untrusted content. Careless use of dangerouslySetInnerHTML can open the door to cross-site scripting attacks, so sanitize any untrusted SVG markup before rendering it. In App
, we define an SVG string and pass it to Button
as a prop.
To ensure reusability and maintain a single source of truth for SVG code, create a separate component for multiple uses of the same SVG with different styles. If it is only needed once or has a simple SVG, pass the SVG string as a prop.
\\nWhile SVGs are lightweight and scalable, these techniques can improve performance, especially in complex graphics or animations:
React.memo for static SVG components
If an SVG component doesn’t change often, wrap it with React.memo()
to prevent unnecessary re-renders:
import React from \'react\';\\n\\nconst Logo = React.memo(() => (\\n <svg width=\\"100\\" height=\\"100\\">\\n <circle cx=\\"50\\" cy=\\"50\\" r=\\"40\\" stroke=\\"black\\" strokeWidth=\\"3\\" fill=\\"red\\" />\\n </svg>\\n));\\n\\nexport default Logo;\\n\\n
SVGR transforms SVGs to optimized and lightweight React components. You can also use SVGO (SVG Optimizer) to reduce SVG file sizes by removing redundant attributes, metadata, and comments.
\\nIf your app includes multiple or complex SVGs, lazy loading helps defer their loading until needed, reducing initial page load time:
\\nimport React, { lazy, Suspense } from \'react\';\\n\\nconst LazySVG = lazy(() => import(\'./LargeSVG\'));\\n\\nconst App = () => (\\n <Suspense fallback={<div>Loading...</div>}>\\n <LazySVG />\\n </Suspense>\\n);\\n\\n
<title> and <desc>
For inline SVGs, use <title> and <desc> elements to provide additional context, and add role="img" and aria-label attributes for meaningful descriptions. This will improve the user experience for screen readers and assistive technologies.
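Here is a quick sketch of an accessible inline SVG (the icon itself is illustrative):

const InfoIcon = () => (
  <svg role="img" aria-label="Information" viewBox="0 0 24 24" width="24" height="24">
    {/* Screen readers announce the title and description instead of the raw path data */}
    <title>Information</title>
    <desc>A circled letter i indicating additional information</desc>
    <circle cx="12" cy="12" r="10" fill="none" stroke="currentColor" />
    <path d="M12 8v.01M12 11v5" stroke="currentColor" />
  </svg>
);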
React frameworks, like Next.js, Gatsby, and Refine, offer various ways to handle SVGs efficiently, with some methods being similar across them.
\\nHandling SVGs in Gatsby projects can be done in multiple ways. Using gatsby-plugin-svgr
to import SVGs as React components is the most efficient way, as it allows better styling and animation of SVGs.
To use SVGs as React components, install gatsby-plugin-svgr
:
npm install gatsby-plugin-svgr\\n\\n
Then configure it by adding the plugin to gatsby-config.js
:
module.exports = {\\n plugins: [\'gatsby-plugin-svgr\'],\\n};\\n\\n
Now you can use it in your components like this:
\\nimport React from \'react\';\\nimport Logo from \'../assets/logo.svg\'; // SVG as a React component\\n\\nconst Header = () => (\\n <header>\\n <Logo width={100} height={100} />\\n <h1>About us</h1>\\n </header>\\n);\\n\\nexport default Header;\\n\\n
You can also use SVGs as static image files with the <img>
tag, or inline SVG code directly for full customization.
To display an SVG as a static image, you can import it directly:
\\nimport React from \'react\';\\nimport logo from \'../assets/logo.svg\';\\n\\nconst Header = () => (\\n <header>\\n <img src={logo} alt=\\"Logo\\" />\\n </header>\\n);\\n\\nexport default Header;\\n\\n
Next.js supports importing SVGs as React components using @svgr/webpack
.
To use SVGs as React components in Next.js, install @svgr/webpack
:
npm install @svgr/webpack\\n\\n
Next, update next.config.js
:
const nextConfig = {\\n webpack(config) {\\n config.module.rules.push({\\n test: /\\\\.svg$/,\\n use: [\'@svgr/webpack\'],\\n });\\n return config;\\n },\\n};\\n\\nmodule.exports = nextConfig;\\n\\n
Now you can use it in your components like this:
\\nimport React from \'react\';\\nimport Logo from \'../assets/logo.svg\'; // SVG as a React component\\n\\nconst Header = () => (\\n <header>\\n <Logo width={100} height={100} />\\n <h1>My Next.js Site</h1>\\n </header>\\n);\\n\\nexport default Header;\\n\\n
You can also use react-svgr playground to generate React components from your SVG code by pasting the SVG code in the playground, then copying the generated React components into your Next.js project.
\\nHandling SVGs in Refine projects follows a similar approach to handling SVGs in React. Refine is a React-based framework for building CRUD-heavy web applications like admin panels, dashboards, and internal tools easily.
\\nSVGO optimizes SVG files by removing metadata, comments, and unnecessary attributes, reducing file size while maintaining visual quality. It minifies paths, making the SVG more efficient and lightweight.
\\nYou can optimize SVGs in any React frameworks with SVGO:
\\nnpm install svgo\\nsvgo input.svg -o output.svg\\n\\n
Or use an online SVG optimizer like SVGOMG.
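If you need to keep specific attributes while optimizing, SVGO can be configured. A minimal svgo.config.js sketch, assuming SVGO v2+ plugin names:

// svgo.config.js
module.exports = {
  plugins: [
    {
      name: "preset-default",
      params: {
        overrides: {
          // Keep the viewBox so the SVG still scales correctly when resized with CSS
          removeViewBox: false,
        },
      },
    },
  ],
};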
\\nHere are some common mistakes to avoid when using SVGs in React:
- Using <img> tags for dynamic or styled SVGs: if you need to customize an SVG with CSS or React props, it’s better to use inline SVGs or import them as components

You’re probably more familiar with image formats like JPEG, GIFs, and PNG than you are with SVG. However, there are many reasons why you’d want to use SVG over these other formats:
- SVGs use presentation attributes such as fill instead of color. You can also style SVG with CSS. Likewise, because SVGs are DOM-like, they can be created, edited, and animated with any text editor
\\nThough its usage is straightforward with HTML, you need additional tools and configuration to start using SVG in frontend frameworks like React. Most of the popular React project starter toolsets, like Create React App, Vite, and Astro, come with out-of-the-box configurations for handling static assets such as images, including SVGs.
\\nAs mentioned in this article, Create React App uses the SVGR webpack loader, @svgr/webpack
, under the hood. Therefore, you can import an SVG with an import
statement and render it in your application. You can also inline the SVG markup. However, rendering SVGs inline can make your components hard to maintain.
For a custom React project that uses webpack as a bundler, you can configure the @svgr/webpack
loader to load SVGs similar to Create React App.
As you use SVGs, it is worth mentioning that complex images can have large SVG files, especially if you want to inline the SVG. Though most popular modern browsers fully support SVGs, some browsers, especially mobile browsers, do not have full support for certain SVG features.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nTanStack Start is a new full-stack React framework built on Vite, Nitro, and TanStack Router. The framework is packed with features like Server Side Rendering (SSR), Server functions, API routes, and so much more.
Although TanStack Start’s development is still in its early stages, the documentation states that it is already safe to use in production.
\\nWith the benefits it presents, TanStack Start might seem like the right framework for your next project. However, choosing it comes with some downsides, including a relatively small community and less comprehensive documentation, among a few others.
\\nThis post explains how TanStack Start works and features a tutorial on how to build a simple developer portfolio — styled with Tailwind CSS — in the framework. We’ll also summarize the pros and cons of choosing TanStack Start for a project. To follow along, you need knowledge of React and TypeScript.
\\nHere are some notable features of TanStack Start:
\\nBecause TanStack Start is a full-stack framework, using it guarantees type safety on both the client and server sections of an application. This also makes it easy for both sections to have shared types (like form validation types).
\\nThese are functions that can be invoked from either the client or the server, but will always run on the server. In TanStack Start, one can use server functions to fetch data for routes, query a database, or perform any other actions that should run on a server. It is important to note that TanStack Start allows server functions to be further extended with middleware.
\\nWith API routes, a developer can create backend APIs for applications. This means that there’s mostly no need for a separate backend server when working with TanStack Start. API routes use a file-based convention and are saved inside the /app/routes/api
directory of a TanStack Start project.
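As a rough sketch, an API route file might look like the following; the exact helper names can vary between TanStack Start versions, so treat this as illustrative:

// app/routes/api/hello.ts (illustrative)
import { json } from "@tanstack/react-start";
import { createAPIFileRoute } from "@tanstack/react-start/api";

export const APIRoute = createAPIFileRoute("/api/hello")({
  // Runs on the server for GET requests to /api/hello
  GET: async () => json({ message: "Hello from an API route" }),
});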
TanStack Start can render applications using both SSR and static prerendering. SSR and prerendering lead to faster page loads, which can improve user experience and SEO.
\\nTanStack Router (which is a popular alternative to React Router) handles the routing in TanStack Start. The routing library has important features such as typesafe navigation, data loading with SWR caching, and routing middleware, to mention but a few. Using TanStack Start means adopting these benefits as well.
\\nBecause Vite powers TanStack Start, the development server of the framework loads quickly. Similarly, the hot module replacement works instantly. With TanStack Start, the developer maximizes time spent building a project.
\\nAt the time of writing, there are two different ways of installing TanStack Start. You can either clone a template using degit
, or set it up from scratch. This section of the article shows how to set up TanStack Start from scratch using its individual dependencies and files. In addition, it explains how the framework works with each dependency/file.
First, create a folder for the TanStack Start project, then cd
into that folder:
mkdir tanstack-app\\ncd tanstack-app\\n\\n
After that, initialize npm for the project:
\\nnpm init -y\\n\\n
The TanStack Start documentation highly recommends building with TypeScript, so install it as a dev dependency:
\\nnpm install --save-dev typescript\\n\\n
Create a configuration file for the TypeScript compiler (tsconfig.json
) in the root folder of the project. The TanStack documentation suggests the configuration file has at least the following options:
//tsconfig.json\\n{\\n \\"compilerOptions\\": {\\n \\"jsx\\": \\"react-jsx\\",\\n \\"moduleResolution\\": \\"Bundler\\",\\n \\"module\\": \\"ESNext\\",\\n \\"target\\": \\"ES2022\\",\\n \\"skipLibCheck\\": true,\\n \\"strictNullChecks\\": true,\\n }\\n}\\n\\n
Next, install the React and React DOM npm packages. They serve as the rendering engine for the user interface. Also, install their types for type safety while using the packages:
\\nnpm install react react-dom \\nnpm install --save-dev @types/react @types/react-dom\\n\\n
Next, install the Vite plugin for React, together with vite-tsconfig-paths
. The latter is a Vite plugin that instantly resolves path aliases:
npm install --save-dev @vitejs/plugin-react vite-tsconfig-paths\\n\\n
To wrap up the installations, add npm packages for TanStack Start, TanStack Router, and Vinxi. Vinxi is a tool used to build full-stack web applications (and even opinionated full-stack frameworks) with Vite. It is an important foundation of the TanStack Start framework:
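The package names below are inferred from the imports used later in this tutorial:

npm install @tanstack/react-start @tanstack/react-router vinxi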
\\nThe TanStack Start team has promised to develop a custom Vite plugin to replace Vinxi in the future. However, TanStack Start still relies heavily on it for now.
\\nAfter the installations, open the package.json
file and add the following scripts:
// package.json\\n{\\n // ...\\n \\"type\\": \\"module\\",\\n \\"scripts\\": {\\n \\"dev\\": \\"vinxi dev\\",\\n \\"build\\": \\"vinxi build\\",\\n \\"start\\": \\"vinxi start\\"\\n }\\n}\\n\\n
These scripts instruct the framework to use Vinxi for starting up a development server, bundling production builds, and serving the production build. Make sure to also set \\"type\\"
to \\"module\\"
so that Vite can properly transpile the React code for a browser.
In the root directory, create a file app.config.ts
. This config file is for initializing installed Vite plugins. Use the file to initialize the vite-tsconfig-paths
plugin:
// app.config.ts\\nimport { defineConfig } from \'@tanstack/react-start/config\';\\nimport tsConfigPaths from \'vite-tsconfig-paths\';\\n\\nexport default defineConfig({\\n vite: {\\n plugins: [\\n tsConfigPaths({\\n projects: [\'./tsconfig.json\'],\\n }),\\n ],\\n },\\n})\\n\\n
With the app config done, set up the framework’s file structure. Use the file tree below as a map to create files in appropriate locations:
\\n.\\n├── app/\\n│ ├── routes/\\n│ │ └── `__root.tsx`\\n│ ├── `client.tsx`\\n│ ├── `router.tsx`\\n│ ├── `routeTree.gen.ts`\\n│ └── `ssr.tsx`\\n├── `.gitignore`\\n├── `app.config.ts`\\n├── `package.json`\\n└── `tsconfig.json`\\n\\n
The app/router.tsx
file configures TanStack Router. It imports a route tree (which is automatically generated) and allows control for options like scroll restoration. Add the following to the file and save it:
// app/router.tsx\\nimport { createRouter as createTanStackRouter } from \'@tanstack/react-router\';\\nimport { routeTree } from \'./routeTree.gen\';\\n\\nexport function createRouter() {\\n const router = createTanStackRouter({\\n routeTree,\\n scrollRestoration: true,\\n });\\n return router;\\n}\\n\\ndeclare module \'@tanstack/react-router\' {\\n interface Register {\\n router: ReturnType<typeof createRouter>;\\n }\\n}\\n\\n
In the app/ssr.tsx
file, import the newly created createRouter
function, alongside other server utilities from @tanstack/react-start
. This file allows the framework to handle SSR, and properly serve to the client whatever route a user requests:
// app/ssr.tsx\\nimport {\\n createStartHandler,\\n defaultStreamHandler,\\n} from \'@tanstack/react-start/server\';\\nimport { getRouterManifest } from \'@tanstack/react-start/router-manifest\';\\n\\nimport { createRouter } from \'./router\';\\n\\nexport default createStartHandler({\\n createRouter,\\n getRouterManifest,\\n})(defaultStreamHandler);\\n\\n
Likewise, import the createRouter
function into the app/client.tsx
. This file is the client entry point and handles any functionality related to client-side routing. It is responsible for hydrating the client-side after the user gets a resolved route from the server:
// app/client.tsx\\n/// <reference types=\\"vinxi/types/client\\" />\\nimport { hydrateRoot } from \'react-dom/client\';\\nimport { StartClient } from \'@tanstack/react-start\';\\nimport { createRouter } from \'./router\';\\n\\nconst router = createRouter();\\n\\nhydrateRoot(document, <StartClient router={router} />);\\n\\n
With all of that done, open the __root.tsx
file to set up the root route of the application. This is a route that is always rendered, so it’s a good place to set up global client configurations like default HTML meta tags, and importing a Tailwind CSS compiled file. Add the following to the file:
// app/routes/__root.tsx\\nimport {\\n Outlet,\\n createRootRoute,\\n HeadContent,\\n Scripts,\\n} from \'@tanstack/react-router\';\\nimport type { ReactNode } from \'react\';\\n\\nexport const Route = createRootRoute({\\n head: () => ({\\n meta: [\\n {\\n charSet: \'utf-8\',\\n },\\n {\\n name: \'viewport\',\\n content: \'width=device-width, initial-scale=1\',\\n },\\n {\\n title: \'TanStack Start Starter\',\\n },\\n ],\\n }),\\n component: RootComponent,\\n});\\n\\nfunction RootComponent() {\\n return (\\n <RootDocument>\\n <Outlet />\\n </RootDocument>\\n );\\n}\\n\\nfunction RootDocument({ children }: Readonly<{ children: ReactNode }>) {\\n return (\\n <html>\\n <head>\\n <HeadContent />\\n </head>\\n <body>\\n {children}\\n <Scripts />\\n </body>\\n </html>\\n );\\n}\\n\\n
Next, create an index route using the createFileRoute
function. Because TanStack Router works with file-based routing, every route needs a new file (inside the app/routes
directory).
Create the file app/routes/index.tsx as your first route. For the purpose of illustration, this route will display an h1
that says “Hello, World!”:
// app/routes/index.tsx\\nimport { createFileRoute } from \'@tanstack/react-router\';\\n\\nexport const Route = createFileRoute(\'/\')({\\n component: Home,\\n});\\n\\nfunction Home() {\\n return <h1>Hello, World!</h1>;\\n}\\n\\n
Finally, start the development server from the CLI:
\\nnpm run dev\\n\\n
This starts a localhost server with the address: http://localhost:3000
. Open that URL in the browser. If you followed the above steps accurately, you should have this output:
With that, you have created a TanStack Start project from scratch. You imported the necessary libraries and now have a better idea of how the framework uses them.
\\nMake sure to exclude node_modules
and any .env
files from being tracked by Git:
# .gitignore\\nnode_modules/\\n.env\\n\\n
This section is a tutorial that shows how to build a project with TanStack Start. The project is a simple developer portfolio. It showcases the use cases of TanStack Start’s features such as routing, SSR, data loading with route loaders, client-side navigation, and server functions. The final source code of the project can be found here on GitHub.
\\nTo use Tailwind CSS to style a TanStack Start project, install Tailwind and its Vite plugin:
\\nnpm install tailwindcss @tailwindcss/vite\\n\\n
Initialize the plugin in app.config.ts
(the application’s config file):
// app.config.ts
import { defineConfig } from '@tanstack/react-start/config';
import tailwindcss from '@tailwindcss/vite';
import tsConfigPaths from 'vite-tsconfig-paths';

export default defineConfig({
  vite: {
    plugins: [
      tsConfigPaths({ projects: ['./tsconfig.json'] }),
      tailwindcss(),
    ],
  },
});
Next, create a file app/styles/app.css
. This file will be the main stylesheet of the application. Vite will send compiled Tailwind styles of the project into this file. Inside the file, add the following:
/* app/styles/app.css */\\n@import \'tailwindcss\';\\n\\n
Finally, import the CSS file into the root route of the application. Link to the stylesheet in the head
tag of the application. That way, every part of the application will have access to the compiled Tailwind styles:
//...\\nimport appCSS from \'../styles/app.css?url\';\\n\\nexport const Route = createRootRoute({\\n head: () => ({\\n // ...\\n links: [{ rel: \'stylesheet\', href: appCSS }],\\n })\\n // ...\\n})\\n\\n// ...\\n\\n
Now, you can use Tailwind classes in the project.
\\nThis project uses a couple of icons from the React Icons library, so install the react-icons
npm package:
npm install react-icons\\n\\n
Create a new folder app/components
for storing components of the application. The first component to create is a header that will be present in every route of the application. Create a file Header.tsx
and add the following:
// app/components/Header.tsx\\nimport { Link } from \'@tanstack/react-router\';\\n\\nconst Header = () => {\\n return (\\n <header className=\'bg-gray-100 flex justify-between px-8 py-3 items-center\'>\\n <h1 className=\'text-2xl\'>\\n <Link to=\'/\'>Portfolio</Link>\\n </h1>\\n <ul className=\'flex gap-5 text-blue-900\'>\\n <li>\\n <Link to=\'/\'>Home</Link>\\n </li>\\n <li>\\n <Link to=\'/projects\'>Projects</Link>\\n </li>\\n <li>\\n <Link to=\'/contact\'>Contact</Link>\\n </li>\\n </ul>\\n </header>\\n );\\n};\\n\\nexport default Header;\\n\\n
Notice how the above snippet uses the <Link>
component from TanStack Router. <Link>
is used to create internal links in a TanStack Start project. Because the Header
component should be present in every route, import it into the root route app/routes/__root.tsx
. Notice the <main>
tag and a few Tailwind styles added to the root component for styling and semantic purposes:
// app/routes/__root.tsx\\n// ...\\nimport Header from \'../components/Header\';\\n\\n// ...\\n\\nfunction RootComponent() {\\n return (\\n <RootDocument>\\n <Header />\\n <main className=\'max-w-4xl mx-auto pt-10\'>\\n <Outlet />\\n </main>\\n </RootDocument>\\n );\\n}\\n// ...\\n\\n
Create a new file inside the component folder called Hero.tsx
. The file will contain a hero component for the home page of the portfolio website:
// app/components/Hero.tsx\\nimport { FaXTwitter, FaLinkedinIn, FaGithub } from \'react-icons/fa6\';\\n\\nconst Hero = () => {\\n return (\\n <div className=\'bg-blue-900 text-gray-50 py-10 px-15 flex gap-15 rounded-2xl items-center\'>\\n <div>\\n <img\\n src=\'https://avatars.githubusercontent.com/u/58449038?v=4\'\\n alt=\'Profile Picture\'\\n className=\'h-50 rounded-full\'\\n />\\n </div>\\n <div>\\n <span className=\'text-3xl mb-3 block\'>Amazing Enyichi Agu</span>\\n <p className=\'mb-5\'>generic and easily forgettable developer bio</p>\\n <div className=\'text-2xl flex gap-3\'>\\n <a href=\'#\'>\\n <FaGithub />\\n </a>\\n <a href=\'#\'>\\n <FaXTwitter />\\n </a>\\n <a href=\'#\'>\\n <FaLinkedinIn />\\n </a>\\n </div>\\n </div>\\n </div>\\n );\\n};\\n\\nexport default Hero;\\n\\n
Create another component file SkillBox.tsx
. Inside the file, add the following:
// app/components/SkillBox.tsx\\ninterface SkillBoxProps {\\n children?: React.ReactNode;\\n}\\n\\nconst SkillBox = ({ children }: SkillBoxProps) => {\\n return (\\n <span className=\'px-4 py-2 text-blue-800 bg-blue-50 rounded\'>\\n {children}\\n </span>\\n );\\n};\\n\\nexport default SkillBox;\\n\\n
After you have created those components, navigate to the index route (index.tsx
) and add the following markup:
// app/routes/index.tsx\\nimport { createFileRoute } from \'@tanstack/react-router\';\\nimport Hero from \'../components/Hero\';\\nimport SkillBox from \'../components/SkillBox\';\\n\\nexport const Route = createFileRoute(\'/\')({\\n component: Home,\\n});\\n\\nfunction Home() {\\n return (\\n <>\\n <Hero />\\n <div className=\'mt-10\'>\\n <h2 className=\'text-2xl\'>Languages</h2>\\n <div className=\'mt-2.5 flex gap-3\'>\\n <SkillBox>HTML</SkillBox>\\n <SkillBox>CSS</SkillBox>\\n <SkillBox>JavaScript</SkillBox>\\n <SkillBox>TypeScript</SkillBox>\\n </div>\\n </div>\\n <div className=\'mt-10\'>\\n <h2 className=\'text-2xl\'>Tools</h2>\\n <div className=\'mt-2.5 flex gap-3\'>\\n <SkillBox>React</SkillBox>\\n <SkillBox>GraphQL</SkillBox>\\n <SkillBox>Node.js</SkillBox>\\n <SkillBox>Socket.io</SkillBox>\\n <SkillBox>Next.js/Remix</SkillBox>\\n </div>\\n </div>\\n </>\\n );\\n}\\n\\n
This creates a simple introduction page with a hero and lists a few developer skills. After saving all the files, the output on the browser should look like this:
\\nThe projects page is a simple page that displays some projects of the owner of the portfolio. It will make use of TanStack Start’s server function and TanStack Router’s loader. The loader displays public GitHub repositories of the author.
\\n\\nBut first, create a component called ProjectCard.tsx
. This component is a card for every listed project in the portfolio:
// app/components/ProjectCard.tsx\\nimport { FaCodeFork, FaRegStar } from \'react-icons/fa6\';\\n\\ninterface ProjectCardProps {\\n url: string;\\n projectName: string;\\n language: string;\\n stars: number;\\n forks: number;\\n}\\n\\nconst ProjectCard = (props: ProjectCardProps) => {\\n return (\\n <a\\n className=\'px-6 py-4 rounded-md bg-green-50 shadow mb-5 block\'\\n href={props.url}\\n >\\n <div className=\'flex justify-between mb-2\'>\\n <span>{props.projectName}</span>\\n <div className=\'flex gap-3\'>\\n <span>\\n {props.stars} <FaRegStar className=\'inline\' />\\n </span>\\n <span>\\n {props.forks} <FaCodeFork className=\'inline\' />\\n </span>\\n </div>\\n </div>\\n <div>\\n <span className=\'text-sm bg-blue-800 text-gray-50 px-1 py-0.5\'>\\n {props.language}\\n </span>\\n </div>\\n </a>\\n );\\n};\\n\\nexport default ProjectCard;\\n\\n
Next, create the projects route with a server function. The route will load with whatever data the server function returns:
\\n// app/routes/projects.tsx\\nimport { createServerFn } from \'@tanstack/react-start\';\\nimport { createFileRoute } from \'@tanstack/react-router\';\\nimport ProjectCard from \'../components/ProjectCard\';\\n\\ninterface Project {\\n full_name: string;\\n html_url: string;\\n language: string;\\n stargazers_count: number;\\n forks: number;\\n}\\n\\nconst getProjects = createServerFn({\\n method: \'GET\',\\n}).handler(async () => {\\n const res = await fetch(\\n \'https://api.github.com/users/enyichiaagu/repos?sort=updated&per_page=5\',\\n {\\n headers: {\\n \'X-GitHub-Api-Version\': \'2022-11-28\',\\n accept: \'application/vnd.github+json\',\\n },\\n }\\n );\\n return res.json();\\n});\\n\\nexport const Route = createFileRoute(\'/projects\')({\\n component: Projects,\\n loader: () => getProjects(),\\n});\\n\\nfunction Projects() {\\n const projects: Project[] = Route.useLoaderData();\\n\\n return (\\n <>\\n <h2 className=\'text-2xl\'>Projects</h2>\\n <div className=\'mt-2.5\'>\\n {projects.map(\\n (\\n { full_name, html_url, language, stargazers_count, forks },\\n index\\n ) => (\\n <ProjectCard\\n projectName={full_name}\\n url={html_url}\\n language={language}\\n stars={stargazers_count}\\n forks={forks}\\n key={index}\\n />\\n )\\n )}\\n </div>\\n </>\\n );\\n}\\n\\n
Now, if the client navigates to /projects
, the result should look like this:
Using server functions means that the fetch() operation happens on the server, which can lead to faster load times.
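A related benefit: because the handler body never ships to the browser, server functions are a safe home for secrets. As a hypothetical sketch (not part of the tutorial), the same createServerFn pattern could attach an assumed GITHUB_TOKEN environment variable to raise GitHub's API rate limits:

```typescript
// Hypothetical sketch — GITHUB_TOKEN is an assumed, server-only environment
// variable; the endpoint and options mirror the loader's server function below.
import { createServerFn } from '@tanstack/react-start';

const getProjectsAuthenticated = createServerFn({ method: 'GET' }).handler(
  async () => {
    const res = await fetch(
      'https://api.github.com/users/enyichiaagu/repos?sort=updated&per_page=5',
      {
        headers: {
          accept: 'application/vnd.github+json',
          // This handler runs only on the server, so the token never reaches the client
          Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
        },
      }
    );
    return res.json();
  }
);
```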
The final route of the project is a contact page. This page features a form that a visitor of the portfolio website can use to send the owner an email. This route uses TanStack Start’s server function to send the email with the help of Nodemailer.
\\nIn order to implement this functionality properly, you first need to install the Nodemailer library. Nodemailer allows a Node.js program to send an email:
\\nnpm install nodemailer\\nnpm install --save-dev @types/nodemailer\\n\\n
After installing nodemailer
, set up your credentials for sending the email with Nodemailer. Here is a useful resource to quickly set it up.
Next, create a .env
file to store those credentials as environment variables. You mostly only need an email and a password. Prefix the variables with VITE_
as that is the convention when working with Vite. (Note that Vite treats VITE_-prefixed variables as safe to expose to client-side code, so for a production app you may prefer unprefixed variables read on the server via process.env.) Here is an example:
# .env\\nVITE_EMAIL_ADDRESS=XXXXXX\\nVITE_EMAIL_PASSWORD=XXXXXX\\n\\n
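Before wiring the credentials into the app, it can help to confirm they actually authenticate. Nodemailer's transporter exposes a verify() method for exactly this; the following standalone script is a sketch that reuses the variable names from the .env example above:

```typescript
// check-smtp.ts — a quick sanity check (sketch). Assumes the .env values are
// already present in the process environment (e.g., loaded with dotenv or
// exported in your shell) before running it with `npx tsx check-smtp.ts`.
import nodemailer from 'nodemailer';

const transporter = nodemailer.createTransport({
  host: 'smtp.gmail.com',
  port: 465,
  secure: true,
  auth: {
    user: process.env.VITE_EMAIL_ADDRESS,
    pass: process.env.VITE_EMAIL_PASSWORD,
  },
});

// verify() opens a connection and authenticates without sending a message
transporter
  .verify()
  .then(() => console.log('SMTP credentials look good'))
  .catch((err) => console.error('SMTP verification failed:', err));
```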
Finally, set up the contact page:
// app/routes/contact.tsx
// Imports
import { useState } from 'react';
import { createServerFn } from '@tanstack/react-start';
import { createFileRoute } from '@tanstack/react-router';
import nodemailer from 'nodemailer';
import { FaCheck } from 'react-icons/fa';

// Defining the contact route
export const Route = createFileRoute('/contact')({
  component: Contact,
});

// Creating the transporter object for nodemailer
const transporter = nodemailer.createTransport({
  host: 'smtp.gmail.com',
  secure: true,
  port: 465,
  auth: {
    user: import.meta.env.VITE_EMAIL_ADDRESS,
    pass: import.meta.env.VITE_EMAIL_PASSWORD,
  },
});

// Function that uses the nodemailer transporter to send the email
const sendEmailMessage = async ({ email, message }: { email: string; message: string }) => {
  const res = await transporter.sendMail({
    from: import.meta.env.VITE_EMAIL_ADDRESS,
    to: import.meta.env.VITE_EMAIL_ADDRESS,
    subject: `Message from ${email}, sent from Portfolio Website`,
    text: message,
    replyTo: email,
  });
  return res;
};

// Server function that validates the input and calls the `sendEmailMessage` function
const submitForm = createServerFn({ method: 'POST' })
  .validator((data: FormData) => {
    const email = data.get('email');
    const message = data.get('message');
    if (!email || !message) {
      throw new Error('Email and Message are required');
    }
    return { email: email.toString(), message: message.toString() };
  })
  .handler(async (ctx) => {
    return await sendEmailMessage(ctx.data);
  });

// JSX for contact page
function Contact() {
  const [isSuccess, setIsSuccess] = useState<boolean>(false);
  return (
    <>
      <p className='text-2xl'>Contact Me</p>
      {isSuccess && (
        <div className='bg-green-50 text-green-900 px-6 py-3 rounded w-md mt-5'>
          <FaCheck className='inline' /> Email Sent Successfully
        </div>
      )}
      <form
        method='post'
        className='mt-5'
        onSubmit={async (event: React.FormEvent<HTMLFormElement>) => {
          event.preventDefault();
          const form: HTMLFormElement = event.currentTarget;
          const formData = new FormData(form);
          await submitForm({ data: formData });
          setIsSuccess(true);
          return form.reset();
        }}
      >
        <div className='mb-2'>
          <label htmlFor='email'>Email</label>
          <br />
          <input
            type='email'
            name='email'
            id='email'
            required
            className='border border-gray-400 w-md px-3 py-1.5'
          />
        </div>
        <div className='mb-2'>
          <label htmlFor='message'>Message</label>
          <br />
          <textarea
            name='message'
            id='message'
            placeholder='Write me a message'
            required
            className='border border-gray-400 w-md px-3 py-1.5 h-50'
          ></textarea>
        </div>
        <button className='bg-blue-900 text-gray-50 px-4 py-2 rounded'>
          Send
        </button>
      </form>
    </>
  );
}
Now, one can send emails from the portfolio’s contact page:
There is no doubt that TanStack Start is a promising new framework — and paradigm — for building full-stack React apps. It has many features and a strong team that has worked on other robust libraries behind it.
However, there are also a few downsides to using this framework, especially in its current state. Listed below are the pros and cons of the TanStack Start framework.
\\nRegardless, TanStack Start is a great framework, and most of the cons exist because the framework is still new to the landscape.
This article is just an introduction to the TanStack framework. The documentation goes into further detail on the framework's numerous capabilities. But with this, you should have a good understanding of how the framework works and how to use it.
As mentioned earlier, TanStack Start will roll out new features in the future and has huge potential. With what the team has accomplished so far, it has no doubt earned a spot as a strong alternative to Next.js and Remix/React Router.
\\n In TypeScript, the Record
type allows you to define dictionary-like objects, similar to the dictionaries found in languages like Python. It is a key-value pair structure with a fixed type for the keys and a generic type for the values. Here is the internal definition of the Record
type:
type Record<K extends string | number | symbol, T> = { [P in K]: T; }\\n\\n
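Reading that definition, Record is simply a mapped type over the union of keys, so for a concrete union it expands as follows:

```typescript
// Record<'a' | 'b', number> expands to a mapped type...
type Expanded = Record<'a' | 'b', number>;
// ...which is structurally identical to writing the object type by hand:
type Manual = { a: number; b: number };

const ok: Expanded = { a: 1, b: 2 }; // both keys required, both numbers
```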
In this article, we’ll explore the Record
type in TypeScript to better understand what it is and how it works. We’ll also see how to use it to handle enumeration, and how to use it with generics to understand the properties of the stored value when writing reusable code.
Record
type in TypeScript?

The Record<Keys, Type>
is a utility type in TypeScript that helps define objects with specific key-value pairs. It creates an object type where the property keys are of type Keys
, and the values are of type Type
. This is particularly useful when you need to ensure that an object has a specific structure with predefined keys and value types.
In practical terms, Record
is often used to represent collections of data, such as API responses, configuration objects, or dictionaries. For example, Record<string, number>
represents an object where all keys are strings and all values are numbers, while Record<\'id\' | \'name\' | \'age\', string>
represents an object that must have exactly the properties id
, name
, and age
, all with string values.
The primary benefit of using Record
is type safety – TypeScript will verify that all required keys are present and that all values match their expected types, catching potential errors at compile time rather than runtime.
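To make that guarantee concrete, here is a small sketch — removing any key from the object below, or adding an unknown one, stops the assignment from compiling:

```typescript
type Env = 'development' | 'staging' | 'production';

// All three environments must be present, and every value must be a string
const apiBase: Record<Env, string> = {
  development: 'http://localhost:3000',
  staging: 'https://staging.example.com',
  production: 'https://example.com',
};
// Omitting `production` => "Property 'production' is missing..."
// Adding `test: '...'`  => "Object literal may only specify known properties..."
```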
Editor’s note: This article was last updated by Ikeh Akinyemi in April 2025 to align with updates to TypeScript 5.8.3, add comparisons between Record
and other object-mapping approaches, and discuss performance and real-world applications of Record
Record
vs. other key-value mappings

When working with key-value pairs in TypeScript, several options are available, each with distinct characteristics and use cases. Understanding the differences between Record
, plain objects, Map
, and indexed types will help you choose the right approach for your specific needs.
Record
vs. plain objects

At first glance, a Record
type might seem similar to a regular object, but there are important differences:
type RecordObject = Record<string, number>;\\n\\n// Usage is similar\\nconst plainObj = { a: 1, b: 2 };\\nconst recordObj: RecordObject = { a: 1, b: 2 };\\n\\n
The usage of the two types is similar; the difference lies in type safety. The Record type provides stronger type safety for keys, especially when used with union types or literals:
type ValidKeys = \'id\' | \'name\' | \'age\';\\ntype UserRecord = Record<ValidKeys, string | number>;\\n\\nconst user: UserRecord = {\\n id: 10,\\n name: \'Peter\',\\n age: 37,\\n // address: \'123 Main St\' // Error: Object literal may only specify known properties\\n};\\n\\n
Record
vs. Map
While both Record
and JavaScript’s built-in [Map](https://blog.logrocket.com/typescript-mapped-types/)
can store key-value pairs, they serve different purposes and have distinct characteristics:
type UserRecord = Record<string, { age: number, active: boolean }>;\\nconst userRecord: UserRecord = {\\n \'john\': { age: 30, active: true },\\n \'jane\': { age: 28, active: false }\\n};\\n\\nconst userMap = new Map<string, { age: number, active: boolean }>();\\nuserMap.set(\'john\', { age: 30, active: true });\\nuserMap.set(\'jane\', { age: 28, active: false });\\n\\n
The following table compares TypeScript Record
and Map
types across different features:
| Feature | Record | Map |
| --- | --- | --- |
| Performance | Optimized for static data access; faster for direct property access | Optimized for frequent additions and removals; slightly slower for lookups |
| Type safety | Strong compile-time type checking for both keys and values | Runtime type safety; any type can be used as keys, including objects and functions |
| Use cases | Ideal for static dictionaries with predetermined keys | Better for dynamic collections where keys/values change frequently |
| Syntax | Record<KeyType, ValueType> | new Map<KeyType, ValueType>() |
| Key types | Limited to string, number, or symbol (which convert to strings) | Supports any data type as keys, including objects and functions |
| Methods | Standard object operations (dot notation, brackets, delete operator) | Built-in methods: get(), set(), has(), delete(), clear(), size |
| Iteration | No guaranteed order (though modern engines maintain creation order) | Preserves insertion order when iterating |
| Memory | Better memory efficiency for static data | Higher memory overhead but better for frequently changing collections |
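A few of these rows are easiest to see in code. Reusing the userRecord and userMap values defined earlier, this short sketch contrasts size, lookup, and iteration:

```typescript
// Size: Map tracks it; for a Record you derive it
console.log(userMap.size);                   // 2
console.log(Object.keys(userRecord).length); // 2

// Lookup: property access vs. get()
console.log(userRecord['john'].age);    // 30
console.log(userMap.get('john')?.age);  // 30 (get() may return undefined)

// Iteration: Map guarantees insertion order
for (const [name, info] of userMap) {
  console.log(name, info.active);
}
```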
Record
vs. indexed types

TypeScript also offers indexed types, which provide another way to define key-value structures:
\\n// Indexed type\\ninterface IndexedUser {\\n [key: string]: { age: number, active: boolean };\\n}\\n\\ntype RecordUser = Record<string, { age: number, active: boolean }>;\\n\\n
An index signature offers more flexibility than the Record
type as it can be combined with explicit properties like a normal interface definition:
interface MixedInterface {
  id: number;          // Explicit property
  name: string;        // Explicit property
  [key: string]: any;  // Any other properties
}
You will mostly use the indexed type signature when you need to mix specific properties with dynamic properties, or you have relaxed and simpler typing requirements.
\\nRecord
The power of TypeScript’s Record
type is that we can use it to model dictionaries with a fixed number of keys, as we have seen earlier. This involves the use of the union type to specify the allowed keys. For example, we could use both types to model a university’s courses:
type Course = \\"Computer Science\\" | \\"Mathematics\\" | \\"Literature\\"\\n\\ninterface CourseInfo {\\n professor: string\\n cfu: number\\n}\\n\\nconst courses: Record<Course, CourseInfo> = {\\n \\"Computer Science\\": {\\n professor: \\"Mary Jane\\",\\n cfu: 12\\n },\\n \\"Mathematics\\": {\\n professor: \\"John Doe\\",\\n cfu: 12\\n },\\n \\"Literature\\": {\\n professor: \\"Frank Purple\\",\\n cfu: 12\\n }\\n}\\n\\n
In this example, we defined a union type named Course
that will list the names of classes and an interface type named CourseInfo
that will hold some general details about the courses. Then, we used a Record
type to match each Course
with its CourseInfo
.
So far, so good — it all looks like quite a simple dictionary. The real strength of the Record
type is that TypeScript will detect whether we missed a Course
.
Let’s say we didn’t include an entry for Literature
. We’d get the following error at compile time:
“Property Literature
is missing in type { \\"Computer Science\\": { professor: string; cfu: number; }; Mathematics: { professor: string; cfu: number; }; }
but required in type Record<Course, CourseInfo>
.”
In this example, TypeScript is clearly telling us that Literature
is missing.
TypeScript will also detect if we add entries for values that are not defined in Course
. Let’s say we added another entry in Course
for a History
class. Because we didn’t include History
as a Course
type, we’d get the following compilation error:
“Object literal may only specify known properties, and \\"History\\"
does not exist in type Record<Course, CourseInfo>
.”
Record
data

We can access data related to each Course
as we would with any other dictionary:
console.log(courses[\\"Literature\\"])\\n\\n
The statement above prints the following output:
\\n{ \\"teacher\\": \\"Frank Purple\\", \\"cfu\\": 12 }\\n\\n
Next, let’s take a look at some ways to iterate over the keys of a Record
type as a data collection.
In this section, we will explore various methods to iterate over Record
types, including forEach
, for...in
, Object.keys()
, and Object.values()
. Understanding how to iterate over TypeScript Record
types is crucial for effectively accessing the data within these structures.
forEach
To use forEach
with a Record
type, you first need to convert the Record
to an array of key-value pairs. This can be done using Object.entries()
:
type Course = \\"Computer Science\\" | \\"Mathematics\\" | \\"Literature\\";\\ninterface CourseInfo {\\n professor: string;\\n cfu: number;\\n}\\nconst courses: Record<Course, CourseInfo> = {\\n \\"Computer Science\\": { professor: \\"Mary Jane\\", cfu: 12 },\\n \\"Mathematics\\": { professor: \\"John Doe\\", cfu: 12 },\\n \\"Literature\\": { professor: \\"Frank Purple\\", cfu: 12 },\\n};\\nObject.entries(courses).forEach(([key, value]) => {\\n console.log(`${key}: ${value.professor}, ${value.cfu}`);\\n});\\n\\n
for...in
The for...in
loop allows iterating over the keys of a record:
for (const key in courses) {\\n if (courses.hasOwnProperty(key)) {\\n const course = courses[key as Course];\\n console.log(`${key}: ${course.professor}, ${course.cfu}`);\\n }\\n}\\n\\n
Object.keys()
Object.keys()
returns an array of the record’s keys, which can then be iterated over using forEach
or any loop:
Object.keys(courses).forEach((key) => {\\n const course = courses[key as Course];\\n console.log(`${key}: ${course.professor}, ${course.cfu}`);\\n});\\n\\n
Object.values()
Object.values()
returns an array of the record’s values, which can be iterated over:
Object.values(courses).forEach((course) => {\\n console.log(`${course.professor}, ${course.cfu}`);\\n});\\n\\n
Object.entries()
Object.entries()
returns an array of key-value pairs, allowing you to use array destructuring within the loop:
Object.entries(courses).forEach(([key, value]) => {\\n console.log(`${key}: ${value.professor}, ${value.cfu}`);\\n});\\n\\n
TypeScript’s Record
type can be utilized for more advanced patterns, such as selective type mapping with the Pick
type and implementing dynamic key-value pairs. These use cases provide additional flexibility and control when working with complex data structures.
Pick
type with Record
for selective type mapping

The Pick
type in TypeScript allows us to create a new type by selecting specific properties from an existing type. When combined with the Record
type, it becomes a powerful tool for creating dictionaries with only a subset of properties.
Suppose we have a CourseInfo
interface with several properties, but we only want to map a subset of these properties in our Record
:
interface CourseInfo {\\n professor: string;\\n cfu: number;\\n semester: string;\\n students: number;\\n}\\n\\ntype SelectedCourseInfo = Pick<CourseInfo, \\"professor\\" | \\"cfu\\">;\\n\\ntype Course = \\"Computer Science\\" | \\"Mathematics\\" | \\"Literature\\";\\n\\nconst courses: Record<Course, SelectedCourseInfo> = {\\n \\"Computer Science\\": { professor: \\"Mary Jane\\", cfu: 12 },\\n \\"Mathematics\\": { professor: \\"John Doe\\", cfu: 12 },\\n \\"Literature\\": { professor: \\"Frank Purple\\", cfu: 12 },\\n};\\n\\n
In the above example, we used Pick<CourseInfo, \\"professor\\" | \\"cfu\\">
to create a new type SelectedCourseInfo
that only includes the professor
and the cfu
properties from CourseInfo
. Then, we defined a Record
type that maps each Course
to SelectedCourseInfo
.
A common question is how to deal with dynamic or unknown keys while preserving type safety. How do you type user-generated objects, or data coming in from external APIs whose structure you're not certain of beforehand?
\\nThe Record
type provides a couple of patterns for handling this common scenario effectively.
Record
with string keys

The simplest approach is to use Record
with string
as the key type, while constraining the value types:
type PreferenceValue = string | boolean | number;\\ntype Preferences = Record<string, PreferenceValue>;\\n\\nconst userPreferences: Preferences = {};\\n\\nfunction setPreference(key: string, value: PreferenceValue) {\\n userPreferences[key] = value;\\n}\\n\\nsetPreference(\'theme\', \'dark\');\\nsetPreference(\'notifications\', true);\\n// setPreference(\'fontSize\', []); // Error: Type \'never[]\' is not assignable to type \'PreferenceValue\'\\n\\n
This approach provides type safety for the union values while allowing any string to be used as the key.
A second pattern applies when you have some known keys but also need to allow arbitrary ones:
// Define known keys
type KnownPreferenceKeys = 'theme' | 'notifications' | 'fontSize';

// Create a type for known and unknown keys.
// The `string & {}` intersection keeps the union from collapsing to plain
// `string`, so editors can still autocomplete the known keys.
type PreferenceKey = KnownPreferenceKeys | (string & {});

type PreferenceValue = string | boolean | number;
type Preferences = Record<PreferenceKey, PreferenceValue>;

const preferences: Preferences = {
  theme: 'dark',
  notifications: true,
  fontSize: 14,
  // Allow additional properties
  customSetting: 'value'
};
This approach provides better autocompletion for known keys while still allowing arbitrary string keys.
\\nRecord
with other utility types

TypeScript's utility types can be combined with Record to create more complex and type-safe data structures.
Readonly
with Record
The Readonly
type makes all the properties of a type read-only. This is especially useful when you want to ensure that dictionary entries cannot be modified:
type ReadonlyCourseInfo = Readonly<CourseInfo>;\\n\\nconst readonlyCourses: Record<Course, ReadonlyCourseInfo> = {\\n \\"Computer Science\\": { professor: \\"Mary Jane\\", cfu: 12, semester: \\"Fall\\", students: 100 },\\n \\"Mathematics\\": { professor: \\"John Doe\\", cfu: 12, semester: \\"Spring\\", students: 80 },\\n \\"Literature\\": { professor: \\"Frank Purple\\", cfu: 12, semester: \\"Fall\\", students: 60 },\\n};\\n\\n// Trying to modify a readonly property will result in a compile-time error\\n// readonlyCourses[\\"Computer Science\\"].cfu = 14; // Error: Cannot assign to \'cfu\' because it is a read-only property. \\n\\n
In the code above, Readonly<CourseInfo>
ensures that all properties of CourseInfo
are read-only, preventing modifications. Attempting to modify a read-only property results in a compile-time error, as shown below:
readonlyCourses[\\"Computer Science\\"].cfu = 14;\\n// Error: Cannot assign to \'cfu\' because it is a read-only property.\\n\\n
Partial
with Record
The Partial
type makes all properties of a type optional. This is especially useful when you want to create a dictionary where some entries may not have all properties defined:
type PartialCourseInfo = Partial<CourseInfo>;\\n\\nconst partialCourses: Record<Course, PartialCourseInfo> = {\\n \\"Computer Science\\": { professor: \\"Mary Jane\\" },\\n \\"Mathematics\\": { cfu: 12 },\\n \\"Literature\\": {},\\n};\\n\\n
In the code above, Partial<CourseInfo>
makes all properties of CourseInfo
optional, allowing us to create a Record
where some courses may not have all properties defined or even have none of the properties defined.
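One practical consequence is that every property read may now produce undefined, so reads need a guard or a fallback. A quick sketch reusing the partialCourses object above:

```typescript
// Each property is optional, so account for undefined when reading
const mathCfu = partialCourses['Mathematics'].cfu ?? 0;               // 12
const litProfessor = partialCourses['Literature'].professor ?? 'TBD'; // 'TBD'
```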
Record
While you can often use TypeScript’s Record
type, there are times when it is not the ideal solution. You must understand its performance characteristics and limitations to make better design decisions while using it.
Record
type is primarily a compile-time type construct with no runtime overhead; at runtime, a Record
is just a JavaScript object.
However, how you structure and use it can impact application performance. Consider, for example, that direct property access (obj.propName
) is relatively faster than dynamic access (obj[propName]
).
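In code, the two access styles look like this (a trivial sketch; real-world differences are engine-dependent and rarely matter outside hot paths):

```typescript
const settings: Record<string, number> = { timeoutMs: 5000, retries: 3 };

const a = settings.timeoutMs; // static (direct) property access
const key = 'timeoutMs';
const b = settings[key];      // dynamic access through a variable key
```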
Also, Record
objects with a large number of entries can consume significant memory. When nesting is involved, prefer flatter data structures over deeply nested Record
types:
// Deep nesting\\ntype DeepRecord = Record<string, Record<string, Record<string, string>>>;\\n\\n// Flatter structure\\ntype FlatRecord = Record<string, { category: string; subcategory: string; value: string }>;\\n\\n
Record
Let’s consider some scenarios where using Record
might not be the best choice:
- Map might be more appropriate than using a Record type. Record works best when you can define the shape of your data structure at compile time
- Map provides better performance for frequent mutations than Record, given that Record is optimized for stable, less frequently changing data
- Record keys are limited to string, number, or symbol types (which are converted to strings), but Map supports the use of objects, functions, or other non-primitive values as keys

Record
In this article, we discussed TypeScript’s built-in Record<K, V>
utility type. We examined its basic usage and behavior, comparing it with other key-value structures like plain objects and Maps
to understand when Record
provides the most value.
We discussed various approaches for iterating over TypeScript record types using methods such as forEach
, for...in
, Object.keys()
, and Object.values()
, which enable effective manipulation and access to data within these structures.
The article also covered advanced patterns, showing how to combine Record
with TypeScript’s other utility types like Pick
, Partial
, and Readonly
to create more sophisticated and type-safe data structures. We talked about practical applications, including handling dynamic keys and creating selective type mappings.
Finally, we reviewed performance considerations and scenarios where alternative approaches might be more appropriate than using the Record
type.
The Record
type is a very useful part of TypeScript. While some use cases may be specialized, it offers significant value for creating safe, maintainable code in many applications. How do you use TypeScript's Record in your projects? Let us know in the comments!
With the recent breakthroughs in AI and large language models, there has been a major shift in how we approach building software. One notable change is a new workflow called vibe coding, where you skip the traditional software development pattern and let AI help you build instead.
\\nIn this article, we’ll break down what vibe coding actually means, how the term became popularized, and why it’s catching on. We’ll also walk through a real example by vibe coding a full-stack time capsule app from scratch in just a few minutes. To wrap it up, we’ll look at the pros, the downsides, and whether this workflow is viable in the long run.
\\nIn simple words, vibe coding is the process of building software by vibe, i.e., letting ideas flow and using AI to do the heavy lifting, without getting stuck on structure, rules, or boilerplate.
\\nCompared to the traditional software development workflow, where you’d plan things out, architect the system, and manually code each part, vibe coding is different. When you’re vibe coding, you describe what you want, use AI to generate most of it, and just tweak as you go. This is also made possible by how good AI tools and LLMs have gotten, especially code-aware ones that can understand even rough prompts and generate working code.
\\nThis practice has been around for a while now. But the term “vibe coding” was popularized by Andrej Karpath, cofounder of OpenAI. In this tweet, Karpathy casually described a new way of building apps by just “seeing stuff, saying stuff, running stuff, and copy-pasting stuff,” with barely any typing and mostly just talking to an AI to get things done:
The tweet resonated with a lot of people, gave the practice a name that sticks, and inspired even more people to lean into vibe coding.
\\nThe first step in the vibe coding process is to get an idea of what you want to build. Ironically, AI tools can also help generate these ideas or refine one you already have. Once your idea is set and you’ve either thought it through or jotted it down, the next step is to pick a tool.
\\nSome AI coding tools that don’t require much setup and let you vibe code directly from your browser. These include Claude, Replit, Lovable, Bolt, and v0 by Vercel. However, if you want more control over what’s happening under the hood, you can use tools like Cursor or Windsurf instead.
\\nWith your idea and vibe coding tool ready, the next step is to prompt the AI to start building it out. From there, you iterate, then iterate again until you get your desired result.
\\nEnough said; let’s see vibe coding in practice!
\\nThe world is moving so fast, and I want to be able to look back someday (five, 10, or 20 years from now) and see what certain moments look like. That’s why I thought it’d be cool to vibe code a digital time capsule app, where I can write and upload pictures or videos of what things look like today, then lock them until a future date. I also have a specific tech stack in mind: Next.js for the front-end and API, and MySQL for storage.
\\nWith the idea set, the next step was to open Cursor, one of my go-to vibe coding tools, and prompt it with the following idea:
\\nCreate a time capsule app where the user can set a title, description, and a future unlock date, and open the capsule once that date arrives. They should be able to upload media (images, videos, etc.). The UI should use a box design for the capsule to give it an aesthetic feel. Build it using Next.js and MySQL (no ORM), and style it with Tailwind CSS:
Cursor then presented me with a command to install Next.js with the Tailwind and TypeScript setup, as well as the MySQL library. Then, it generated the project files, including a schema.sql file with the basic table setup, and even updated the README.md with instructions on how to run the app. I followed the steps, ran the command to set up the database, and launched the app.
\\nBut when it loaded, I still saw the default Next.js index page. Apparently, it forgot to update that, so I followed up with, “The index page is not properly updated.”
\\nThis was also a good reminder that while these AI tools seem impressive, they still miss basic things:
It fixed it. I reloaded the app, and the initial output looked like this:
This is a nice start, but obviously, we don’t want the form to be the first thing users see when they open the app. So, I added this prompt:
\\nThe form should not be the first thing the user sees when they open the app. Instead, show a title, a brief description, and any time capsules they’ve already created. If they haven’t created any yet, display a message letting them know that their capsule list is empty and encourage them to create one. The time capsule creation form can then be shown as a popup:
And with that update, the homepage now looked like this:
After a few iterations, it looked something like this:
At this point, I thought it would make sense to let users set a closing date for each capsule so they could keep adding content or uploading images until that date. So I prompted:
\\nLet’s update each capsule to have a closing date period, so that users can still upload media or update the description until that period is reached.
\\n\\nThe AI handled the logic and updated the database schema to include a new closing date field. But it didn’t add any UI changes for this feature, so I followed up with the following:
\\nIf the closing date has not been reached, display the media the user has uploaded so far (for each capsule), along with options to delete and add more media, and to update the capsule description:
At this stage, everything worked as expected. If a capsule’s closing date hasn’t passed, I can update it with new images or change the description. Also, the app now properly locks a capsule until its open date is reached, and I can’t view its contents before then:
That pretty much sums up vibe coding in practice. You mostly tell the AI what to do, ask it to fix things when it messes up, and repeat until you get your desired result.
\\nVibe coding has some real advantages, including:
\\nWhile vibe coding is a powerful workflow, it still has some major drawbacks
\\nThe answer mostly depends on your vibe coding approach.
\\nIf your approach is to entirely rely on the AI to write all your code just by describing what you want, it won’t work well in the long run. You’ll lose track of how everything fits together, and maintaining your app will become harder over time.
\\nA better long-term approach is to treat AI coding tools like a collaborator, i.e, start small, explain what you’re building, and ask for help step by step. Instead of just asking it to fix or implement things, let it guide you through the structure and logic behind each part.
\\nThis way, you’re building with context. Plus, you get to stay in control of the architecture and logic of your application.
\\nIn this article, we explored vibe coding, how it became popular, its major advantages and drawbacks, how it works in practice by building a full-stack app using vibe coding, and whether it’s viable in the long term. Vibe coding, if done well, has a lot of potential, but it also comes with challenges that shouldn’t be ignored.
AI coding tools have become day-to-day partners for many developers. These tools are helping devs ship products faster than ever, and their usage is becoming more prevalent.
\\nAccording to Y Combinator managing partner Jared Friedman, nearly a quarter of the W25 startup batch have codebases that were almost entirely generated by AI. While not every developer embraces this change, refusing to do so feels like holding on to a BlackBerry in an iPhone world.
\\nThese tools have become very capable, sometimes uncomfortably so. When I see them take on hours of hard work in minutes, I wonder about my job security. Yes, they come with plenty of disclaimers about potential errors, but their quality is undeniable.
\\nIn this article, we will explore the leading AI coding tools for 2025, from IDEs to conversational AI assistants. We’ll also run a little test of efficiency to help guide your exploration.
\\nLet’s discuss 10 AI coding tools for developers in 2025:
\\nCursor is one of the best coding tools around. It goes beyond basic code completion with its Agent mode, a feature that doesn’t get enough credit and attention. Unlike other AI coding tools that merely suggest snippets, Cursor can actually complete entire programming tasks from start to finish.
\\nCombined with its ability to automatically detect and fix lint errors, it reduces debugging time. Cursor can write and execute terminal commands for you after you approve them. This bridges the gap between you trying to remember and you implementing.
\\nWhat surprised me most about Cursor is its smart cursor prediction system that anticipates where you’ll place your cursor next. This seems simple until you experience how it makes coding easy and telepathic.
\\nWhile most users know about Cursor’s chat function, few take full advantage of its deep integration capabilities like @Web for pulling in up-to-date information from the internet or the ability to reference documentation directly with @LibraryName.
\\nPricing: Cursor AI offers three pricing tiers: a free Hobby plan with limited completions and premium requests; a $20/month Pro plan with unlimited completions and more premium requests; and a $40/user/month Business plan adding enterprise features like SSO, centralized billing, and admin controls.
\\nGitHub Copilot’s model flexibility is something you may want to try. It lets you switch between Anthropic’s Claude 3.5 Sonnet, OpenAI o3, and GPT-4o, depending on your task. This means you can use preferred models for different coding problems, optimizing for speed with one model and deep reasoning with another, without switching tools.
\\nAgent Mode takes Copilot beyond simple code completion into a true collaborative form. It can independently gather context across multiple files, suggest edits, test changes, and validate them for your approval.
\\nCopilot’s growing ecosystem of third-party extensions significantly expands its capabilities beyond just writing code. You can check logs, create feature flags, and even deploy applications directly from Copilot Chat. Plus, it works perfectly with popular languages.
\\nPricing: GitHub copilot offers three tiers: Free ($0) with basic features and unlimited repositories; Team ($4/user/month) adding collaboration tools and more CI/CD minutes; and Enterprise ($21+/user/month) with advanced security and compliance features. It’s available as an optional add-on across all plans.
\\nCodeium was introduced as a GitHub Copilot alternative. As of today, I’ll say it has not only lived up to that idea, but has made a name for itself. Codeium offers something truly uncommon: a completely free tier for individual developers with no hidden catches.
\\nCodeium provides its abilities at zero cost, which means students and professional developers can try out this tool without worrying about subscription fees.
\\nThe company’s approach to training data represents a significant legal advantage. Codeium has avoided training on non-permissive code (like GPL-licensed repositories), and this has provided users with protection from copyright and licensing risks that have created controversy around other tools.
\\nThis ethical stance on training data gives peace of mind to both individual developers and enterprises concerned about IP liability.
\\n\\nCodeium has gone beyond only offering extensions. They now have a purpose-built Windsurf Editor, which is their dedicated IDE designed specifically to maximize their AI capabilities.
\\nThis is good growth and a shift from simply adding AI features to existing IDEs. Instead, Codium represents an environment optimized for AI-assisted development.
\\nPricing: Codeium offers five pricing tiers: a Free plan with limited credits; Pro ($15/month) with 500 premium credits; Pro Ultimate ($60/month) with unlimited prompt credits; Teams ($35/user/month) with pooled credits; and Enterprise with custom solutions. Paid plans include access to advanced models (GPT-4o, Claude Sonnet), faster speeds, and expanded features, with additional credits available for purchase.
\\nClaude, created by Anthropic, is my go-to AI coding tool. It’s effective in creating production-level code with debugging and conversational querying features. Though it requires precise prompting for the best results, it also has a low error rate, which makes it reliable for professional programming.
\\nClaude offers models ranging from the advanced 3.7 Sonnet to the speedy Haiku version. This flexibility allows developers to choose between intelligence, speed, and cost depending on their project requirements.
\\nHigher performance models will deliver superior results, but they come with increased costs that smaller teams or newbie developers might not enjoy.
\\nClaude provides enterprise security with SOC II certification and HIPAA compliance options for sensitive development. Its resistance to misuse and copyright indemnity protections serve high-trust industries requiring secure coding environments.
\\nPricing: Claude’s free version offers Claude 3.7 Sonnet but with certain limitations. Its Pro version costs $18. Depending on your region, you may pay as little as $10.
\\nChatGPT can be referred to as the father of the AI coding tools.
\\nIt was, of course, created by OpenAI. ChatGPT offers a high-quality, comprehensive coding toolkit. I admire how ChatGPT seamlessly integrates web browsing to check documentation and analyze code screenshots through GPT Vision; it’s a feature other tools are still catching up on.
\\nFor data-oriented programming, ChatGPT’s Advanced Data Analysis (ADA) provides powerful capabilities that transform it into a coding partner for data scientists and analysts. Developers can upload CSV, Excel, or JSON datasets for ChatGPT to analyze, clean, and visualize. They can then generate the corresponding Python or R code to reproduce these operations, effectively.
\\nAs an original AI conversational tool, it has evolved from GPT-3.5 to GPT-4 and now GPT-4o, all while facing increasing competition from specialized coding tools. It also provides a GPT Store, where developers can access or create specialized coding assistants tailored to specific languages, frameworks, or development workflows.
\\nPricing: ChatGPT free version gives you access to GPT‑4o mini, real-time data from the web with search, and limited access to GPT‑4o and o3‑mini. The Plus version generally costs $20, depending on your region.
\\nDeepSeek, the OpenAI frenemy, has secured a spot on this list. DeepSeek-R1 matches OpenAI-o1’s performance while being fully open-source under an MIT license. DeepSeek-R1 is free for developers; they can commercialize the model without any restrictions.
\\nDeepSeek goes beyond its flagship model; it has open-sourced six models ranging from 32B to 70B parameters that rival OpenAI-o1-mini. These smaller models make AI more accessible to researchers and developers with limited computational resources.
\\nPricing: Deepthinking (R1) is free. You can check out their pricing model, as developers may want to use their API. Its API pricing starts at $0.14-$0.55 per million input tokens and $2.19 per million output tokens. DeepSeek offers a significantly cheaper alternative to any developer who is seeking better reasoning capabilities in math, code, and complex problem-solving.
\\nPerlexity is yet another beautiful coding tool that is worth mentioning. This AI tool doesn’t rely solely on training data; it also pulls from current documentation, StackOverflow discussions, and GitHub repositories to provide up-to-date solutions.
\\nThis makes it particularly valuable for learning new languages, frameworks, or libraries that may have evolved since other AI models’ training cutoffs.
\\nThe platform’s file upload features enable it to analyze your existing codebase or documentation. Like other coding tools, Perplexity allows developers to upload partial projects, error logs, or technical specifications and ask targeted questions about improvements, bugs, or implementation strategies.
\\nPricing: Perplexity AI offers unlimited Free searches and three Pro searches per day. The professional plan is $20.
\\nWho would have thought a social media-focused AI would make the list? The Grok 3 version has earned its spot.
\\nGrok, run by X, works as your coding partner, giving you code solutions without needing to run or debug them first.
\\nIt pulls fresh information from the web, so you always have the best and latest programming tips and library updates at your fingertips. The new Grok 3 is available for free with basic features, while paying users get extras like Voice Mode. This makes it accessible to anyone.
\\nGrok’s special DeepSearch feature digs up coding solutions. When paired with Grok Think, which connects complex ideas together, it helps developers with tough programming problems.
\\nPricing: Grok offers Grok 3, its smartest model so far, for free, but with limited advantages. Subscribers to X’s Premium Plus are given access to all the advantages that come with it; its pricing is about $40 per month.
\\nBolt.new takes the AI coding assistance to another level entirely. It stands out by offering a complete development environment directly in your browser. While other AI coding tools typically just generate code (I wouldn’t mention the others to avoid an AI beef), Bolt allows users to install npm packages, run Node.js servers, and connect to third-party APIs, all without any local setup.
\\nThe platform’s most innovative feature gives AI models full control over the development environment. The AI can create and modify files, execute terminal commands, install dependencies, and manage the entire project from creation to deployment. This approach turns AI from a mere suggestion tool into an active developer.
\\nBolt.new supports the most popular JavaScript frameworks and libraries, including Next.js, Astro, and Tailwind CSS. Users simply specify their preferred technologies in prompts, and Bolt configures everything accordingly.
\\nThe ability to share projects via URL makes collaboration straightforward, benefiting both technical developers and team members without coding experience who can now participate more directly in the development process. You can easily import outputs from Figma.
\\nPricing: Bolt offers a free version and various pricing models starting from Pro to Pro 200. Pro starts at $20, Pro 50 at $50, Pro 100 at $100, and Pro 200 – you guessed right – will be $200.
\\nThis list wouldn’t be complete if I didn’t mention v0. v0 by Vercel has been able to generate clean code based on simple descriptions. It stands out by creating complete websites from conversational inputs, handling everything from code tree structure to styling without requiring technical specifications. It also allows you to preview your code results after it’s done.
\\nv0 creates your code in Next.js by default with the styling by ShadCN and can also integrate with popular frameworks such as Vue and Angular. Amongst all the AI tools I have examined, v0 was mentioned because of its ability to create good-looking websites.
\\nIt does well with basic functionalities, but you probably wouldn’t want to use it for complex logic. If you’re looking for an AI coding tool to quickly spin up great UI, give v0 a try.
\\nPricing: v0 has three pricing models for individuals: the free, the premium at $20, and the ultra at $200.
\\nAll the AI conversational tools above have a free version. Students and developers who are not willing to pay $20/month can still access them. The only way to figure out the best tools is to put them to the test on your actual projects.
\\nWhat works brilliantly for one developer might frustrate another. As a frontend developer, I will be putting these tools to the test to solve a difficult LeetCode problem.
\\nI devised a Leetcode problem on Wildcard matching. Wildcard pattern matching checks if a string matches a pattern containing special characters: ?
(matches any single character) and \'\'
(matches any sequence of characters, including none).
The challenge is meant to determine if the entire input string matches the pattern, requiring efficient handling of the multiple matching possibilities created by characters.
\\nIt’s a difficult one from my experience, and also doesn’t have many test cases. Let’s see how many tries it takes these models to solve it:
\\nGiven an input string (s) and a pattern (p), implement wildcard pattern matching with support for \'?\' and \'*\' where:\\n\\n\'?\' Matches any single character.\\n\'*\' Matches any sequence of characters (including the empty sequence).\\nThe matching should cover the entire input string (not partial).\\n\\n \\n\\nExample 1:\\n\\nInput: s = \\"aa\\", p = \\"a\\"\\nOutput: false\\nExplanation: \\"a\\" does not match the entire string \\"aa\\".\\nExample 2:\\n\\nInput: s = \\"aa\\", p = \\"*\\"\\nOutput: true\\nExplanation: \'*\' matches any sequence.\\nExample 3:\\n\\nInput: s = \\"cb\\", p = \\"?a\\"\\nOutput: false\\nExplanation: \'?\' matches \'c\', but the second letter is \'a\', which does not match \'b\'.\\n \\n\\nConstraints:\\n\\n0 <= s.length, p.length <= 2000\\ns contains only lowercase English letters.\\np contains only lowercase English letters, \'?\' or \'*\'.\\n
Here are the results:
\\nAI Model | \\nFirst Try | \\nSecond Try | \\n
---|---|---|
Claude 3.7 Sonnet(Free) | \\nSuccessful | \\nNo need for a second | \\n
ChatGPT (Free) | \\nSuccessful | \\nNo need for a second | \\n
Deepthinking (R1) | \\nSuccessful | \\nNo need for a second | \\n
Perplexity (Free) | \\nSuccessful | \\nNo need for a second | \\n
Grok 3(Free) | \\nSuccessful | \\nNo need for a second | \\n
As you can see, all of these free tools are viable and effective.
\\nUnlike other conversational AI, Bolt.new and v0 have specific use cases in the sense that they are building-focused, so they will have a separate test. Here’s what I came up with:
\\n“Create a modern, responsive landing page for the LogRocket Blog using Next.js, Tailwind CSS, and Lucide React Icons ONLY. The page should feature a very clean and professional design with intuitive navigation and content organization.
\\nImplement a light/dark theme toggle that detects system preferences while allowing users to manually switch themes.
\\nThe landing page should include:
\\nAnd ensure the design is fully responsive across all device sizes, with special attention to typography, spacing, and content hierarchy.”
\\nIn their free versions, Bolt.new did exceptionally well. The theme worked, and the responsiveness was perfect on the first try
\\nWith v0, the theme wasn’t working quite as well. The responsiveness was fine, but it needed a few more touches. Still, I found the output useful.
In this blog article, we examined 10 AI coding tools, including IDEs and conversational AI. We’ve seen how smart and useful they can be. Developers should always be aware of potential errors and hallucinations, but these are about the only cons I can come up with.
\\nThere are some additional tools I didn’t mention in this article that might be worth exploring: OpenAI tools (Codex), Tabnine, Amazon CodeWhisperer, Replit AI, and Qodo.
\\nMy honest advice for beginners will be to learn the traditional way first. You won’t truly appreciate what these tools offer until you’ve felt the pain they’re solving.
\\nBut intermediate devs? If you haven’t jumped in yet, you’re missing out.
Functional components in React utilize built-in Hooks to implement stateful behavior and lifecycle methods. One of these Hooks is useRef
, which is pretty convenient for referencing values in React.
In this guide, we will examine the useRef
Hook in React, learn how to use it, see some of its applications, and discuss best practices to ensure its consistent implementation in future React apps.
useRef
in React?

When working with React, we should always utilize built-in library tools, which are created and optimized for specific scenarios. The useRef
Hook is one such tool that helps us handle references to mutable values in React. It acts as a container storing a reference to a mutable value while persisting it between component re-renders.
The following flowchart illustrates the persistence of a referenced value with useRef
without triggering any re-render to the corresponding component:
In technical terms, the container here is called a mutable reference object, and the value it holds is called a reference.
\\nIf you are new to React, this React Hooks cheat sheet can teach you how to use all the major Hooks in React with practical examples.
\\nuseRef
syntax

When used, the useRef
Hook returns a mutable object with a current
property, which is a reference to the object’s current value.
Let’s understand useRef
with an example, while also exploring its syntax and observing how it functions without triggering re-renders when its value changes:
import { useRef } from \\"react\\";\\n\\nexport default function Counter() {\\n const countRef = useRef(0);\\n\\n const incrementCount = () => {\\n countRef.current += 1;\\n console.log(`Current count: ${countRef.current}`);\\n };\\n\\n return (\\n <div>\\n <p>Check the console for updates.</p>\\n <button onClick={incrementCount}>Increment</button>\\n </div>\\n );\\n}\\n\\n
In the above example, the countRef
variable returns a mutable reference object with zero as its initial reference value. Using the object’s current property, this value is updated whenever the “increment” button is pressed:
See the Pen
\\nReact useRef Counter example by Rahul (@c99rahul)
\\non CodePen.
A change in the countRef.current
property here will not re-render the corresponding <Counter/>
component, and that’s why we are observing changes in the developer console in this example rather than in the UI.
useRef
syntax

When using TypeScript, it is important to specify the type of data a useRef
object will hold. Here's a quick snippet showing both simple and advanced TypeScript type specifications for useRef
:
interface UserInfo {\\n name: string;\\n age?: number;\\n}\\n\\nfunction SomeComponent() {\\n const messageRef = useRef<string | null>(null);\\n const inputRef = useRef<HTMLInputElement | null>(null);\\n const userRef = useRef<UserInfo>({ name: \\"John Doe\\" });\\n // ...\\n}\\n\\n
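Note that typing a DOM ref as HTMLInputElement | null means TypeScript will make you handle the null case — the ref is empty until the element mounts. A minimal sketch (the component name is mine):

```tsx
import { useRef } from "react";

function FocusableInput() {
  const inputRef = useRef<HTMLInputElement | null>(null);

  const focusInput = () => {
    // inputRef.current is null until the <input> mounts; optional chaining guards that
    inputRef.current?.focus();
  };

  return (
    <>
      <input ref={inputRef} placeholder="Type here" />
      <button onClick={focusInput}>Focus</button>
    </>
  );
}
```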
useRef
and useState
Like useRef
, the useState
Hook also persists values, but there’s a subtle difference between them.
\\nThe major difference between these two Hooks is the basic philosophy of their functioning, which broadly defines their three main distinctions:
useState
Hook always provides the current state value on every render, while the useRef
Hook is designed to persist values across rendersuseState
always re-renders the related component, while a change in a useRef
reference never changes the UIuseState
, we can’t change the state directly. Instead, we use a getter-setter pattern to mutate and manage values. With a useRef
object, we can directly mutate the value by modifying the current property of the reference object whenever requiredWe now have a general understanding of how useRef
differs fundamentally from useState
and what it can achieve on its own. For a broader overview of this differentiation, you should see this useState vs. useRef guide.
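To see these distinctions side by side, here is a minimal sketch (the TwoCounters component is mine): the first button triggers a re-render and updates the UI, while the second silently mutates the ref, so its displayed value stays stale until something else causes a render:

```tsx
import { useRef, useState } from "react";

function TwoCounters() {
  const [stateCount, setStateCount] = useState(0); // setter triggers re-renders
  const refCount = useRef(0);                      // mutation is silent

  return (
    <>
      <button onClick={() => setStateCount((c) => c + 1)}>
        state: {stateCount}
      </button>
      <button onClick={() => { refCount.current += 1; }}>
        ref: {refCount.current} {/* stale until a re-render happens */}
      </button>
    </>
  );
}
```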
useRef
in React

To identify the need for useRef
in your components, determine whether or not a value or reference should persist without dealing with renders.
Let’s look at some general applications of useRef
in React that can help you decide immediately if your component really needs it.
useRef
The useRef
Hook works great for handling imperative actions like DOM manipulations.
Instead of using JavaScript Web API methods, such as querySelector
or getElementById
, to select a DOM element in React, we utilize the useRef
Hook to hold its reference.
This approach keeps the reference to that element intact across re-renders and makes things work smoothly without bypassing the virtual DOM, maintaining your app’s integrity.
\\nLet’s access a DOM element with useRef
and use this reference to get the name of the HTML tag it is built with:
function ElementTellingItsTagName() {
  const elementRef = useRef(null);
  const [tagName, setTagName] = useState("...");

  useEffect(() => {
    setTagName(elementRef.current.tagName);
  }, []);

  return (
    <p ref={elementRef}>
      This element is created using a{" "}
      <code>&lt;{tagName.toLowerCase()}&gt;</code> tag.
    </p>
  );
}
\\nNote that the empty array passed as a useEffect
dependency ensures the side-effect runs only once after the component mounts and not on subsequent renders or unmounts:
See the Pen
\\nDOM element selection with useRef by Rahul (@c99rahul)
\\non CodePen.
Following the same path as storing references to DOM elements, we can also manage timeouts, event handlers, and observers with useRef
, which serves different purposes.
Here’s a small example keeping reference of a timeout with useRef
, so that we could clear it at the time of component cleanup. This is important from a memory management perspective:
function TimeoutExample() {\\n const timeoutRef = useRef(null);\\n const [message, setMessage] = useState(\\"Waiting...\\");\\n\\n useEffect(() => {\\n timeoutRef.current = setTimeout(() => {\\n setMessage(\\"Timeout completed!\\");\\n }, 3000);\\n\\n // Cleanup when component unmounts\\n return () => clearTimeout(timeoutRef.current);\\n }, []);\\n\\n return <p>{message}</p>;\\n}\\n\\n
Managing an observer would be slightly different, where you keep references of both the target node and the observer attached to that node:
\\nfunction IntersectionObserverExample() {\\n const targetRef = useRef(null);\\n const observerRef = useRef(null);\\n\\n useEffect(() => {\\n observerRef.current = new IntersectionObserver(\\n (entries) => { /* Do something */ },\\n { threshold: 0.1 }\\n );\\n\\n if (targetRef.current) {\\n observerRef.current.observe(targetRef.current);\\n }\\n\\n // Clean up\\n return () => {\\n if (observerRef.current) {\\n observerRef.current.disconnect();\\n }\\n };\\n }, []);\\n\\n return (\\n <div ...>\\n <div ref={targetRef}>\\n ...\\n </div>\\n </div>\\n );\\n}\\n\\n
We can also utilize useRef
in managing event handlers that should not be recreated on component re-renders. Here’s a quick example:
function ClickTracker() {\\n const clickHandlerRef = useRef(null);\\n const [count, setCount] = useState(0);\\n\\n useEffect(() => {\\n clickHandlerRef.current = () => {\\n setCount((prevCount) => prevCount + 1);\\n };\\n\\n window.addEventListener(\\"click\\", clickHandlerRef.current);\\n\\n return () => window.removeEventListener(\\"click\\", clickHandlerRef.current);\\n }, []);\\n\\n return <p>Clicks: {count}</p>;\\n}\\n\\n
As discussed, useRef
allows your React components to persist mutable values between re-renders. Let’s explore this with a simple example, where we persist the previous count value in a counter component with useRef
:
function Counter() {\\n const [count, setCount] = useState(0);\\n const prevCountRef = useRef(null); \\n\\n const updateCount = (amount) => {\\n setCount((currentCount) => currentCount + amount);\\n };\\n\\n useEffect(() => {\\n // Set the state value as current reference value\\n prevCountRef.current = count; \\n }, [count]);\\n\\n return (\\n <>\\n <p>Current count: {count}</p>\\n {prevCountRef.current !== null && (\\n <p>Previous count: {prevCountRef.current}</p>\\n )}\\n <button onClick={() => updateCount(1)}>+1</button>\\n <button onClick={() => updateCount(5)}>+5</button>\\n </>\\n );\\n}\\n\\n
See the Pen
\\nEvent listeners with useRef by Rahul (@c99rahul)
\\non CodePen.
Note that re-renders in the above example are triggered by changes in state and not in the useRef
reference value. Without state variations, prevCountRef
‘s value would still update, but those updates would not be reflected in the UI.
useRef
We now know the basics of useRef
and the areas where it should be applied. Let’s also discuss some general scenarios where we should avoid it and consider better-suited alternatives.
Consider using useState
over useRef
in cases where a change in the value must trigger the component to re-render:
// ❌ Avoid useRef for values that should trigger UI updates\\nconst countRef = useRef(0);\\n\\n// ✅ Utilize useState instead for such cases\\nconst [count, setCount] = useState(0);\\n
Avoid implementing useRef
to store values that are not expected to change at all. In such cases, consider using a JavaScript const
variable outside the render instead. This is shown below:
function Component() {
  // ❌ Avoid storing an immutable value with useRef
  const piRef = useRef(3.14);

  // ✅ Use a JavaScript const instead
  const pi = 3.14;

  return (/* ... */);
}
When managing event handlers with useRef
, make sure to examine whether or not your event handler depends on the state or any of the component props.
Let’s say the event handler depends on a prop or state variable. Instead of useRef
, consider using the useCallback
Hook for the event handler function to memoize the logic (which updates only when a dependency changes):
function Component() {\\n const [count, setCount] = useState(0);\\n const handleClick = useCallback(() => {\\n setCount((prev) => prev + 1);\\n }, [count]); // Handler depends on `count`\\n\\n useEffect(() => {\\n window.addEventListener(\\"click\\", handleClick);\\n return () => window.removeEventListener(\\"click\\", handleClick);\\n }, [handleClick]); // `handleClick` only changes when `count` changes\\n\\n return <p>{count}</p>;\\n}\\n\\n
This approach keeps the event-handling task in sync with the dependencies. It also improves the performance by preventing the recreation of event handler logic on every render.
\\nReact’s synthetic event system is capable of handling commonly used events intrinsically, which means you don’t necessarily have to handle those events with useRef
.
Therefore, consider sticking to declarative event handling to manage events whenever possible. This avoids the unnecessary hassle of managing a reference for the event handler, attaching it to an event listener, and removing the listener at cleanup:
import { useEffect, useRef } from "react";

function Click() {
  // ❌ Avoid useRef to handle general events
  const buttonRef = useRef(null);

  useEffect(() => {
    const handleClick = () => console.log("Clicked!");
    buttonRef.current?.addEventListener("click", handleClick);

    return () => buttonRef.current?.removeEventListener("click", handleClick);
  }, []);

  return <button ref={buttonRef}>Click</button>;

  // ✅ Use the declarative syntax to manage events
  const handleClick = () => console.log("Clicked!");
  return <button onClick={handleClick}>Click</button>;
}
useRef
best practices
When working with the useRef
Hook, there are some common mistakes developers make that can cause bugs and cost hours of debugging. Let's look at those mistakes, and also learn corrective measures to fix such problems.
Always remember to specify the current property when accessing or modifying a value referenced with the useRef
Hook. While it may seem like a small tip, it can save you from major headaches caused by hard-to-catch bugs in a large codebase:
function ToastNotification() {\\n const toastRef = useRef(null);\\n\\n useEffect(() => {\\n toastRef.show(); // ❌ Missing the current property\\n toastRef.current.show(); // ✅ Always use the current property\\n }, []);\\n\\n return <div ref={toastRef}>...</div>;\\n}\\n\\n
A value or a DOM reference might not exist initially before the first render. In that case, initialize your refs with a null or meaningful default value to avoid potential errors:
\\nfunction ClickLogger() {\\n // Set a meaningful default value when initializing\\n const countRef = useRef(0);\\n\\n // Increment count on click\\n const handleClick = () => {\\n countRef.current += 1;\\n console.log(`Clicked ${countRef.current} times`);\\n };\\n\\n return <button onClick={handleClick}>Click me</button>;\\n}\\n\\n
In cases where a useRef
reference can be null, always add a conditional check before accessing or modifying it to avoid reference errors.
Note that the returned useRef
object will never be null. Therefore, you should check the reference value for null, not the object itself:
function ImageLoader() {\\n const imgRef = useRef(null); \\n\\n useEffect(() => {\\n const loadImage = async () => {\\n const imgSrc = await getImgSrc();\\n\\n // Check if a reference value exists\\n if (imgRef.current) {\\n imgRef.current.src = imgSrc;\\n }\\n };\\n\\n loadImage();\\n }, []);\\n\\n return <img ref={imgRef} alt=\\"Loading...\\" />;\\n} \\n\\n
Always remember to clear timeouts and disconnect observers during component clean-up to avoid any memory leaks in your apps.
\\nThe same applies to event listeners too, even though most event listeners are taken care of during garbage collection by browsers. Whether you are managing an event listener with useRef
or not, you should detach all the attached event listeners as well when cleaning up a component.
Instead of complicating DOM selections and manipulations, use React's declarative approach to keep things simple to understand and easy to maintain. Doing so also saves your app from inconsistencies between direct DOM manipulation and React's virtual DOM updates:
import { useEffect, useRef, useState } from "react";

function Message() {
  // ❌ Bad Practice: Direct DOM Manipulation
  const divRef = useRef(null);
  useEffect(() => {
    if (divRef.current) {
      divRef.current.textContent =
        "Text injected by direct DOM manipulation.";
    }
  }, []);

  // ✅ Good Practice: Using React State to support VDOM
  const [message, setMessage] = useState("...");
  useEffect(() => {
    setMessage("Text from a state variable.");
  }, []);

  return (
    <>
      {/* Bad practice */}
      <p ref={divRef} style={{ color: "red" }}>...</p>

      {/* Good practice */}
      {message && <p style={{ color: "green" }}>{message}</p>}
    </>
  );
}
useRef
with forwardRef
to pass refs from parent to child
Normally, we use the useRef
Hook to reference elements within components. If you want to use a reference to an element (or a value) from a child component in a parent component, you should pair useRef
with forwardRef
, a React API that forwards a reference from the child to any parent accessing it:
// Forward ref to the input element for parent access\\nconst ChildInput = forwardRef((props, ref) => {\\n return <input ref={ref} {...props} />;\\n});\\n\\nfunction ParentComponent() {\\n // Create a ref for ChildInput\\n const inputRef = useRef(null);\\n\\n const focusInput = () => {\\n if (inputRef.current) {\\n inputRef.current.focus(); // ✅ Parent can now access child\'s input\\n }\\n };\\n\\n return (\\n <div>\\n {/* Pass the ref to ChildInput for parent access */}\\n <ChildInput ref={inputRef} placeholder=\\"Type here...\\" />\\n <button onClick={focusInput}>Focus Input</button>\\n </div>\\n );\\n}\\n\\n
If you are interested in learning more about different types of refs, this complete guide to React refs is a must-see.
\nFrom React 19 onwards, instead of using forwardRef
, pass ref
as a component prop and use it with the specified element. Read more about this change in the React documentation.
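Here's a minimal sketch of that React 19 pattern, adapting the earlier child input example; treat it as illustrative rather than a drop-in for older React versions:

import { useRef } from "react";

// React 19+: the ref arrives as a regular prop, no forwardRef wrapper needed
function ChildInput({ ref, ...props }) {
  return <input ref={ref} {...props} />;
}

function Parent() {
  const inputRef = useRef(null);

  return (
    <>
      <ChildInput ref={inputRef} placeholder="Type here..." />
      <button onClick={() => inputRef.current?.focus()}>Focus Input</button>
    </>
  );
}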
useRef
Let’s touch on some patterns in React that utilize useRef
to incorporate different features, while also using other React hooks like useState
, useEffect
, and more.
I’m decorating some of these examples with Tailwind CSS, which is completely optional. The rest of the process remains focused on utilizing useRef
along with other React features to implement common use cases.
Let’s manage a form to generate a purchase receipt with useRef
.
The form contains multiple inputs whose references are managed with individual useRef
objects that collectively help us grab the form data in a state variable. We can put all these reference objects in one parent object for better organization, as shown below:
import { useRef, useState } from "react";

function ReceiptGenerator() {
  // Create refs for receipt form fields
  const formRefs = {
    customerName: useRef(null),
    itemName: useRef(null),
    quantity: useRef(null),
    price: useRef(null),
  };

  // State to store receipt data
  const [receipt, setReceipt] = useState(null);

  const handleSubmit = (e) => {
    e.preventDefault();

    // Calculate the total
    const quantity = parseFloat(formRefs.quantity.current.value);
    const price = parseFloat(formRefs.price.current.value);
    const total = quantity * price;

    const data = {
      customerName: formRefs.customerName.current.value,
      itemName: formRefs.itemName.current.value,
      quantity,
      price,
      total,
    };

    // Update state with receipt data
    setReceipt(data);

    // Reset the form
    e.target.reset();

    // Focus back on the first field
    formRefs.customerName.current.focus();
  };

  return (
    <div className="...">
      {
        /* Structure the form and render
         * the receipt based on the data received
         * on form submission.
         */
      }
    </div>
  );
}
In the above code, we are defining an event handler for the form submission, getting the form data using the input field references held with useRef
, storing it in the receipt
state variable, and then using this data to generate a purchase receipt. Here’s a working example of the same:
See the Pen
\\nReceipt Generator by Rahul (@c99rahul)
\\non CodePen.
It’s not complicated to apply dynamic animations to an element using useRef
. You may use requestAnimationFrame
to do so, or use a JavaScript animation library such as GSAP, Motion, or AnimeJS.
The GSAP library is pretty common these days, so let’s quickly create an animated card component with it and useRef
:
import { useEffect, useRef } from "react";
import gsap from "gsap";

const AnimatedCard = () => {
  const cardRef = useRef(null);
  const animationRef = useRef(null);

  const animateCard = () => {
    // Kill the previous animation if it exists
    if (animationRef.current) animationRef.current.kill();

    animationRef.current = gsap.fromTo(
      cardRef.current,
      { scale: 0.8, opacity: 0, rotate: -10 },
      { scale: 1, opacity: 1, rotate: 0, duration: 0.8, ease: "power2.out" }
    );
  };

  useEffect(() => {
    animateCard(); // Run the animation on mount
  }, []);

  return (
    <div style={ /* CSS Styles */ }>
      <div ref={cardRef} style={ /* CSS Styles */ }>Animated Card</div>
    </div>
  );
};
After styling the card with Tailwind CSS and adding some more content to it, here’s what the final outcome looks like:
\\nSee the Pen
\\nAnimated GSAP card x TWCSS by Rahul (@c99rahul)
\\non CodePen.
The card animation plays automatically upon entering the page. Try this demo in a separate tab or use the “Animate again” button to replay the animation.
\\nLet’s say you have a component that allows the user to set a nickname for their account. This component also shows their previous nickname without making a trip to the network to access a value from the database.
\\nWe can accomplish this using two state variables. However, we can do the same with just one state variable and maintain the previous state value using the useRef
Hook.
I’m following nearly the same pattern as we followed when learning to store a previous state value with useRef
. Here’s how it turned out:
See the Pen
\\nSaving last value with useRef and useState (Simplified) by Rahul (@c99rahul)
\\non CodePen.
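If you'd rather read it as code than poke at the demo, here's a minimal sketch of the pattern; NicknameForm and its handler are hypothetical names of mine:

import { useRef, useState } from "react";

function NicknameForm() {
  const [nickname, setNickname] = useState(""); // one state variable drives the UI
  const prevNicknameRef = useRef(null); // survives re-renders without causing them

  const saveNickname = (next) => {
    prevNicknameRef.current = nickname; // remember the outgoing value
    setNickname(next);
  };

  return (
    <>
      <p>Current nickname: {nickname || "none"}</p>
      {prevNicknameRef.current && <p>Previous nickname: {prevNicknameRef.current}</p>}
      <input
        placeholder="Type a nickname and press Enter"
        onKeyDown={(e) => e.key === "Enter" && saveNickname(e.target.value)}
      />
    </>
  );
}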
Using the DOM reference provided by useRef
, we can easily check if it contains the clicked target. Here’s what the implementation would look like:
function TrackClicks() {\\n const [message, setMessage] = useState(\\"Click somewhere!\\");\\n const drawerRef = useRef(null);\\n\\n function handleClick(event) {\\n if (drawerRef.current && drawerRef.current.contains(event.target)) {\\n setMessage(\\"Clicked inside!\\");\\n } else {\\n setMessage(\\"Clicked outside!\\");\\n }\\n }\\n\\n return (\\n <div className=\\"...\\" onClick={handleClick}>\\n <div ref={drawerRef} className=\\"...\\">{message}</div>\\n </div>\\n );\\n}\\n\\n
You can see a working example of the above code here.
\\nExpanding on this baseline, we can put together a drawer component that shows up on a button click and disappears when clicked outside of itself:
\\nSee the Pen
\\nDrawer with useRef by Rahul (@c99rahul)
\\non CodePen.
Suppose your app demands automatic focusing on an input field after a certain event, such as immediately after the app finishes loading in the browser window or after a button click. In such a case, we can easily attach a ref to that input field with the useRef
Hook and set focus to it using a side-effect:
import { useEffect, useRef } from "react";

export default function AutoFocusInput() {
  const inputRef = useRef(null);

  useEffect(() => {
    if (inputRef.current) {
      inputRef.current.focus();
    }
  }, []);

  return <input ref={inputRef} type="text" placeholder="Type here..." />;
}
Let’s use this approach in the above-implemented Drawer
component and focus the search box in the drawer as soon as the drawer is clicked open:
See the Pen
\\nuseRef Input Focus by Rahul (@c99rahul)
\\non CodePen.
If you look at this example closely, you'll see I'm forwarding a ref
from a child component (InputBox
) to a parent component (Drawer
), following one of the best practices we discussed previously.
To wrap up, we learned about the useRef
Hook in React, discussed its implementation, applications, and some do’s and don’ts. You may find all the examples discussed in this post with some bonus demos in this CodePen collection.
Try implementing the useRef
Hook in your apps if you haven't already. Share your questions or suggestions in the comments. I'd be happy to hear your thoughts and help you.
Fonts are the building blocks of a great user experience. Using custom fonts can provide your apps with a unique identity, helping your project stand out in a competitive marketplace.
\\nIn this guide, we will explore modern ways to add custom fonts in a React Native app, including Google Fonts integration. To follow along, you should be familiar with the basics of React Native or the Expo SDK, including JSX, components (class and functional), and styling. You can also follow the GitHub repositories for this project to see implementations for both the React Native CLI and Expo.
For the React Native CLI:
- Create an assets/fonts folder and add your font files
- Create a react-native.config.js file to specify the font assets path
- Run npx react-native-asset to link the fonts
- Reference the fonts with fontFamily in your styles

For Expo:
- Run npx expo install expo-font
- Run npx expo install @expo-google-fonts/[font-name]
- Use the useFonts Hook to load your fonts

Editor's note: This article was updated by Timonwa Akintokun in April 2025 to align the content with the latest best practices for React Native (0.73+) and Expo SDK 50+, replace outdated font linking methods, update Google Fonts integration, enhance dynamic font loading best practices, and introduce system font recommendations for better UX.
\\nFor our project, we will add custom fonts to a React Native CLI project by building a basic application using Google Fonts. Google Fonts is a library of free, open source fonts that can be used while designing web and mobile applications.
\\nTo bootstrap the React Native CLI project, run the following command in your terminal:
\\nnpx @react-native-community/cli@latest init CustomFontCLI\\n\\n
CustomFontCLI
is the name of our project folder. Once the project has been successfully installed, you will see the project creation confirmation in your terminal:
Open your project in your preferred IDE to get started. In this tutorial, we will use VS Code.
\\nOnce the project has been bootstrapped, we will move on to getting the fonts we want to use. We’ll go over how to import them and use them in our project.
In this project, we will demonstrate custom font integration using two fonts: Quicksand and Raleway, which you can find on Google Fonts.
\nFind your desired fonts in Google Fonts, select the styles you want (e.g., Light 300 or Regular 400, or the variable format if you want a variable font), and click on the Download button:
The folder will be downloaded as a ZIP file with a font folder. Inside the folder, there is a static folder where all the TTF files reside. Copy and keep the TTF files.
\\nIn the next section, we will go through integrating these fonts’ TTF files into our React Native CLI project.
\\nCreate an assets
folder in the root directory of your project, with a subfolder called fonts
. Then, paste all the TTF files you copied from the static folder earlier into the fonts
folder of your project:
Next, create a react-native.config.js
file in the root directory and paste the code below inside it:
module.exports = {\\n project: {\\n ios: {},\\n android: {},\\n },\\n assets: [\'./assets/fonts\'],\\n};\\n\\n
Adjust the assets path according to your font directory. Also, make sure your file is named correctly.
\\nWe have successfully integrated the font files into our project. Now we need to link them so we’ll be able to use them in any files inside the project. To do that, run the following command:
\\nnpx react-native-asset\\n\\n
Once the assets have been successfully linked, you should see the following message in your terminal:
\\nThen, in your App.tsx
file, paste the following code:
import { StyleSheet, Text, View } from "react-native";
import React from "react";

const App = () => {
  return (
    <View style={styles.container}>
      <Text style={styles.quicksandRegular}>
        This text uses a quick sand font
      </Text>
      <Text style={styles.quicksandLight}>
        This text uses a quick sand light font
      </Text>
      <Text style={styles.ralewayThin}>
        This text uses a thin italic raleway font
      </Text>
      <Text style={styles.ralewayItalic}>
        This text uses an italic raleway font
      </Text>
    </View>
  );
};
export default App;

const styles = StyleSheet.create({
  container: {
    backgroundColor: "lavender",
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
  },
  quicksandLight: {
    fontFamily: "Quicksand-Light",
    fontSize: 20,
  },
  quicksandRegular: {
    fontFamily: "Quicksand-Regular",
    fontSize: 20,
  },
  ralewayItalic: {
    fontFamily: "Raleway-Italic",
    fontSize: 20,
  },
  ralewayThin: {
    fontFamily: "Raleway-ThinItalic",
    fontSize: 20,
  },
});
This is a basic App.tsx
file with four texts being styled, each by different font styles of Raleway and Quicksand. Essentially, we are rendering the JSX with four texts to display on the screen and React Native’s StyleSheet API to append different fontFamily
styles to each of the Text
components.
Let’s see the output:
\\nIn this section, we will learn how to use custom fonts with Expo. Expo supports two font formats, OTF and TTF, which work consistently on iOS, Android, and the web. If you have your font in another format, you’ll need advanced configurations.
\\nFirst, create a new Expo project by running this command:
\\nnpx create-expo-app@latest my-app\\n\\n
Once the project has been successfully installed, start the development server by running npm run start
and choose either the iOS or Android option to open your project.
You should see the default Expo screen in your simulator or device:
\\nuseFonts
Hook
In React Native with Expo, the useFonts
Hook is the recommended approach for loading and using custom fonts. It takes an object where the key is the name you want to use to reference the font, and the value is the require statement pointing to the font file.
The syntax looks like this:
\\nimport { useFonts } from \\"expo-font\\";\\n\\nconst [loaded, error] = useFonts({\\n FontName: require(\\"./path/to/font.ttf\\"),\\n});\\n\\n
In this section, we will see how to add Google Fonts to our application. The Expo team has created a set of packages that make it easy to use Google Fonts in your Expo project.
\\nTo add Google Fonts like Raleway and Quicksand, install these packages using the commands below:
\\nnpx expo install expo-font @expo-google-fonts/raleway @expo-google-fonts/quicksand\\n\\n
If you have other Google Fonts you want to use, you can check here for the available fonts with Expo support.
\\nIn your App.js
file, paste the following code block:
import { useFonts } from \\"expo-font\\";\\nimport { StatusBar } from \\"expo-status-bar\\";\\nimport { StyleSheet, Text, View, ActivityIndicator } from \\"react-native\\";\\nimport { Raleway_200ExtraLight } from \\"@expo-google-fonts/raleway\\";\\nimport { Quicksand_300Light } from \\"@expo-google-fonts/quicksand\\";\\n\\nexport default function App() {\\n const [fontsLoaded] = useFonts({\\n Raleway_200ExtraLight,\\n Quicksand_300Light,\\n });\\n if (!fontsLoaded) {\\n return (\\n <View style={styles.container}>\\n <ActivityIndicator size=\\"large\\" color=\\"#0000ff\\" />\\n <Text>Loading fonts...</Text>\\n </View>\\n );\\n }\\n\\n return (\\n <View style={styles.container}>\\n <Text>This text has default style</Text>\\n <Text style={styles.raleway}>This text uses Raleway Font</Text>\\n <Text style={styles.quicksand}>This text uses QuickSand Font</Text>\\n <StatusBar style=\\"auto\\" />\\n </View>\\n );\\n}\\n\\nconst styles = StyleSheet.create({\\n container: {\\n flex: 1,\\n backgroundColor: \\"#fff\\",\\n alignItems: \\"center\\",\\n justifyContent: \\"center\\",\\n },\\n raleway: {\\n fontSize: 20,\\n fontFamily: \\"Raleway_200ExtraLight\\",\\n },\\n quicksand: {\\n fontSize: 20,\\n fontFamily: \\"Quicksand_300Light\\",\\n },\\n});\\n\\n
Here, we imported Raleway_200ExtraLight
and Quicksand_300Light
from their respective packages. We use the useFonts
Hook to load these custom fonts asynchronously. The result from the useFonts
Hook is an array whose first element is a Boolean indicating whether the fonts have loaded; we destructure it with the syntax const [fontsLoaded]
to access that Boolean value.
If the fonts are successfully loaded, the result will be [true, null]
, which means fontsLoaded
is true.
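If you want to handle failures too, the second element of that array is the loading error. A small, hedged addition to the snippet above could look like this:

const [fontsLoaded, fontError] = useFonts({
  Raleway_200ExtraLight,
  Quicksand_300Light,
});

if (fontError) {
  // Surface the failure instead of showing the loader forever
  return <Text>Could not load fonts: {fontError.message}</Text>;
}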
We’ve added an ActivityIndicator
to provide visual feedback while the fonts are loading, which is a best practice to improve user experience.
Let’s see what this looks like in our simulator:
\\nLet’s say you have a personal React Native project you are building, and you have been given custom fonts that are not among the available Google Fonts supported by Expo.
\\nFirst, you will need to download the font
file into your project and install the expo-font
package. For this tutorial, I downloaded Space Mono from FontSquirrel as my custom font.
Create a folder called assets
and, within it, create a fonts
folder, just like you did with the React Native CLI. Then, move the downloaded font files into the fonts
folder of your project, like so:
In your App.js
file, update the code to include the Space Mono custom font:
import { useFonts } from \\"expo-font\\";\\nimport { StatusBar } from \\"expo-status-bar\\";\\nimport { StyleSheet, Text, View, ActivityIndicator } from \\"react-native\\";\\nimport { Raleway_200ExtraLight } from \\"@expo-google-fonts/raleway\\";\\nimport { Quicksand_300Light } from \\"@expo-google-fonts/quicksand\\";\\n\\nexport default function App() {\\n const [fontsLoaded] = useFonts({\\n Raleway_200ExtraLight,\\n Quicksand_300Light,\\n SpaceMono: require(\\"../../assets/fonts/SpaceMono-Regular.ttf\\"),\\n });\\n\\n if (!fontsLoaded) {\\n return (\\n <View style={styles.container}>\\n <ActivityIndicator size=\\"large\\" color=\\"#0000ff\\" />\\n <Text>Loading fonts...</Text>\\n </View>\\n );\\n }\\n\\n return (\\n <View style={styles.container}>\\n <Text>This text has default style</Text>\\n <Text style={styles.raleway}>This text uses Raleway Font</Text>\\n <Text style={styles.quicksand}>This text uses QuickSand Font</Text>\\n <Text style={styles.spacemono}>This text uses Space Mono Font</Text>\\n <StatusBar style=\\"auto\\" />\\n </View>\\n );\\n}\\n\\nconst styles = StyleSheet.create({\\n container: {\\n flex: 1,\\n backgroundColor: \\"#fff\\",\\n alignItems: \\"center\\",\\n justifyContent: \\"center\\",\\n },\\n raleway: {\\n fontSize: 20,\\n fontFamily: \\"Raleway_200ExtraLight\\",\\n },\\n quicksand: {\\n fontSize: 20,\\n fontFamily: \\"Quicksand_300Light\\",\\n },\\n spacemono: {\\n fontSize: 20,\\n fontFamily: \\"SpaceMono\\",\\n },\\n});\\n\\n
Then, view the updates in the simulator:
\\nAs shown in the simulator output above, the additional text uses the SpaceMono
font family that has been used to style the fourth text.
These strategies will help you get the most out of React Native fonts
\\nWhile custom fonts provide unique branding, system fonts offer several advantages:
\\nTo use system fonts effectively, you can use Platform.select()
to provide platform-specific font families:
import { StatusBar } from "expo-status-bar";
import { StyleSheet, Text, View, Platform } from "react-native";

export default function App() {
  return (
    <View style={styles.container}>
      <Text>This text has default style</Text>
      <Text style={styles.text}>This text no longer uses Raleway Font</Text>
      <Text style={styles.text}>This text no longer uses QuickSand Font</Text>
      <Text style={styles.text}>This text no longer uses Space Mono Font</Text>
      <StatusBar style="auto" />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "#fff",
    alignItems: "center",
    justifyContent: "center",
  },
  text: {
    fontFamily: Platform.select({
      ios: "System",
      android: "Roboto",
      default: "System",
    }),
    // Optional: You can also switch font weights dynamically
    // fontWeight: "400", // normal
    fontWeight: "900", // black
  },
});
This approach ensures that your text looks native on each platform while still maintaining control over the style:
\\n
Flash of invisible text (FOIT) occurs when your UI renders before fonts are loaded. To prevent this, always implement proper loading states like we did in the examples above.
\\nYou can also use a splash screen or skeleton UI to improve the user experience while fonts are loading.
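For example, one possible splash screen setup uses the expo-splash-screen package; the font path below assumes the SpaceMono file from earlier:

import { useEffect } from "react";
import { Text, View } from "react-native";
import { useFonts } from "expo-font";
import * as SplashScreen from "expo-splash-screen";

SplashScreen.preventAutoHideAsync(); // keep the native splash visible

export default function App() {
  const [fontsLoaded] = useFonts({
    SpaceMono: require("./assets/fonts/SpaceMono-Regular.ttf"),
  });

  useEffect(() => {
    if (fontsLoaded) {
      SplashScreen.hideAsync(); // reveal the UI only once fonts are ready
    }
  }, [fontsLoaded]);

  if (!fontsLoaded) {
    return null; // the splash screen is still covering the app
  }

  return (
    <View>
      <Text style={{ fontFamily: "SpaceMono" }}>Fonts are ready</Text>
    </View>
  );
}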
\\nWhen implementing custom fonts, always ensure that they work well with the device accessibility settings. Users with visual impairments often increase the font size on their devices. You can support these preferences by using useWindowDimensions
:
import { StyleSheet, Text, useWindowDimensions } from \'react-native\';\\nexport default function AccessibleText() {\\n const { fontScale } = useWindowDimensions();\\n\\n return (\\n <Text style={[styles.text, { fontSize: 16 * fontScale }]}>\\n This text will respect the user\'s font size preferences\\n </Text>\\n );\\n}\\nconst styles = StyleSheet.create({\\n text: {\\n fontFamily: \'CustomFont-Regular\',\\n },\\n});\\n\\n
This approach ensures your app’s typography remains accessible regardless of the custom fonts you implement.
\\n\\nFont rendering can vary significantly across different devices, operating systems, and screen resolutions. Before releasing your app:
\\nThis thorough validation process helps identify font rendering inconsistencies early, ensuring your typography looks professional across all supported devices.
\\nWhen working with custom fonts in React Native, there might be some drawbacks you’ll encounter.
\nA flash of invisible text happens when the UI or page is rendered before the fonts are loaded. This leads to fallback fonts appearing, i.e., the default font of your mobile device. To fix this issue, use conditional rendering to render the app only after the fonts are loaded. We can see this applied throughout this article.
\\nIf the fonts are not ready, render a loading screen or an ActivityIndicator
:
import { useFonts } from "expo-font";
import { Text, View } from "react-native";

export default function App() {
  const [fontsLoaded] = useFonts({
    SpaceMono: require("../../assets/fonts/SpaceMono-Regular.ttf"),
  });

  if (!fontsLoaded) {
    return <Text>Loading fonts...</Text>;
  }

  return (
    <View style={styles.container}>
      {/* application code */}
    </View>
  );
}
As discussed in earlier sections, it’s crucial that font family names are consistent. For example, if you import a font as SourceCodePro-ExtraLight.otf
but then load it into the application under a different path or file name, such as /assets/fonts/SourceCodePro-ExtraLight.ttf
, this will cause the application to throw an error because there has been a fontFamily
name mismatch.
Using the wrong path for your font file will also cause the application to throw an error. Double-check the file structure and ensure the paths match the exact location of the font file. Place your fonts in a /assets/fonts/
folder for easy matches, like /assets/fonts/SourceCodePro-ExtraLight.ttf
.
When working with custom fonts, it’s important to verify that the system you’re working on (iOS, Android, or web) supports the font format you are using (e.g., .ttf, .otf). If not, unexpected errors may occur during development.
\nWhen adding custom fonts to your React Native applications, be mindful of their file size (measured in KB/MB). Large font files can significantly increase an app's loading time, especially when several are loaded at startup.
\nMinimize the number of custom fonts you load by using a single font family with different weights (regular, bold, italic, etc.) rather than multiple unrelated families, and avoid bundling the entire downloaded ZIP folder.
\\nThis is when the fontWeight
or fontStyle
properties do not apply because the loaded font doesn’t support the variations (bold, italic, regular).
Let’s say you downloaded a zip file of the SpaceMono font. It comes in different variations such as SpaceMono-Bold.ttf
, SpaceMono-Regular.ttf
, SpaceMono-Light.ttf
, and so on. If you need a bold weight of the font, then you need to use the SpaceMono-Bold.ttf
font or you will run into this issue. Most custom fonts are explicitly named with their weights or styles, so use the one you need to avoid this issue.
When building standalone apps (for Google PlayStore or Apple’s App Store), it is good to include the expo-font
plugin in app.json
. This is because the expo configuration helps ensure that Expo knows how to handle the fonts and bundle them properly. To do that, add the code below to your app.json
config file:
// app.json\\n\\n{\\n \\"expo\\": {\\n \\"plugins\\": [\\n [\\n \\"expo-font\\",\\n {\\n \\"fonts\\": [\\"./assets/fonts/Inter-Black.otf\\"] // your font\'s path\\n }\\n ]\\n ]\\n }\\n}\\n\\n
react-native-asset
errorsIf you use the React Native CLI, you might encounter some issues with the npx react-native-asset
command, like error Assets destination folder is not defined.
This usually occurs when your react-native.config.js
file is missing, is incorrectly configured, or is placed in the wrong location. To resolve this:
Ensure your react-native.config.js
file exists in the root directory of your project.
Then, verify it has the correct format:
\\nmodule.exports = {\\n project: {\\n ios: {},\\n android: {},\\n },\\n assets: [\'./assets/fonts\'],\\n};\\n\\n
Finally, double-check that the path to your fonts folder is correct.
\nAnother common issue is when assets appear to be linked successfully, but the fonts are still not rendered. In this case, rebuild the app (for example, with npx react-native run-android or npx react-native run-ios) so the native projects pick up the newly linked assets, and confirm that the fontFamily names in your styles match the font file names exactly.
\\nIntegrating custom fonts in React Native applications is not just a technical enhancement but a strategic approach to improving user experience. The modern approach using the useFonts
Hook for Expo projects and npx react-native-asset
for React Native CLI projects significantly simplifies the process compared to older methods.
Remember these key takeaways:
- For Expo projects, load fonts with the useFonts Hook from expo-font
- For React Native CLI projects, link fonts with a react-native.config.js file and npx react-native-asset
By following these best practices, you’ll be able to integrate custom fonts seamlessly while maintaining optimal performance in your React Native applications.
\\nCheck out the GitHub repo for this project using the React Native CLI and Expo.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nRender-blocking resources are CSS stylesheets and JavaScript files that block the first paint of your page. To eliminate them, consider the following to improve page speed and boost SEO:
\\nWhen a user loads your webpage, the browser must process various resources, including CSS and JavaScript, before rendering any content on the screen. Some of these resources can block the first paint of your page, thus impacting its initial page load. In this article, we’ll explore how to identify and eliminate render-blocking resources using Lighthouse.
\\nRender-blocking resources are CSS stylesheets and JavaScript files that block the first paint of your page. When the browser encounters a render-blocking resource, it stops downloading the rest of the resources until these critical files are processed. In the meantime, the entire rendering process is put on hold.
\\nResources considered as render blocking include:
- Scripts placed in the <head> of the document
- Scripts without a defer or async attribute
- Stylesheets without a disabled attribute, thus requiring the stylesheet to be downloaded
- Stylesheets with a media attribute specific to the user's device

Eliminating render-blocking resources is crucial for improving Google's Core Web Vitals performance metrics. The web vitals metrics impact search rankings and user experience. Metrics like First Contentful Paint (FCP) and Largest Contentful Paint (LCP) are particularly sensitive to delays caused by render-blocking CSS and JavaScript.
\\nEditor’s note: This article was updated by Ivy Walobwa in April 2025 to include the most up-to-date strategies for the quick elimination of render-blocking resources in both CSS and JavaScript, and provide updated tools and real-world examples.
\\nSeveral free performance metrics tools allow you to identify and analyze render-blocking resources. Choosing the right tool depends on factors such as the kind of data you want to use, whether field or lab data. Lab data is collected in a controlled environment and is used for debugging issues. Field data is used for capturing real-user experience, but with a more limited set of metrics.
\\nThe most common tools include:
\nWhen running tests on these tools, you'll often find that the metrics reported don't match up exactly. Each tool has differences in hardware, connection speed, locations, screen resolutions, and test methodology. We'll use Lighthouse to improve the performance of a site that performs differently on mobile and desktop views.
\\nAfter running an audit on Lighthouse, you’ll see suggestions to improve site performance, such as Eliminate render-blocking resources
— as shown in the image below:
The following sections will dive into how to eliminate the render-blocking resources by optimizing CSS and JS delivery.
\\nTo eliminate render-blocking resources, it’s essential to identify which resources are needed to render the critical part of your page: above-the-fold content. Critical resources are necessary for rendering the first paint of your page, while non-critical resources apply to content that is not immediately visible. Non-critical resources can be deferred or loaded asynchronously to improve performance.
\\nThe Coverage tab on Chrome DevTools allows you to visualize critical and non-critical CSS and JS. It shows you how much code was loaded and how much is unused. In the image below, the red marking shows non-critical code while the grey marking shows critical code:
You can click on the URL to take a closer look at the critical and non-critical lines of code and optimize their delivery:
\\nEfficient handling of critical CSS is essential for improving page load performance and reducing render-blocking resources. Some CSS optimization techniques include inlining critical CSS, deferring non-critical CSS, and removing unused CSS.
\\nCritical styles required for the first paint are added to a <style>
block in the <head>
tag. Click on a CSS resource on the Coverage tab to see the critical and non-critical styles. The styles marked in grey are extracted and put in the <head>
tag of the page:
<head>
...
<style>
...
.grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
  grid-gap: 20px;
}
...
</style>
</head>
The non-critical CSS, marked in red, is then loaded asynchronously using the preload
link or using CSS media types and media queries.
We add the link rel=\\"preload\\" as=\\"style\\"
attributes to request the stylesheet asynchronously, then add the onload
attribute to process the CSS when the stylesheet finishes loading. For browsers that don’t execute JavaScript, we add the stylesheet inside the noscript
element:
<link rel=\\"preload\\" href=\\"styles.css\\" as=\\"style\\" onload=\\"this.onload=null;this.rel=\'stylesheet\'\\">\\n<noscript><link rel=\\"stylesheet\\" href=\\"styles.css\\"></noscript>\\n\\n
To utilize CSS media types and media queries, we add the media
attribute with print
. Stylesheets that are declared in this format are applied when the page is being printed and are loaded with low priority; hence, they are not marked as render-blocking. The onload
attribute is then used to switch the media type to all once the stylesheet has loaded, so the styles apply on screen:
<link rel=\\"stylesheet\\" href=\\"css/style.css\\" media=\\"print\\" onload=\\"this.media=\'all\'\\">\\n\\n
The process of extracting and inlining critical CSS and deferring non-critical CSS can be automated with tools such as Critical.
\\nBased on configurations added when using Critical, it can extract critical CSS for you and add the styles to your document head. It also loads the remaining stylesheet asynchronously using CSS media types and media queries:
\\n<style>\\n /* inline critical CSS */\\n</style>\\n <link href=\\"https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;700&display=swap\\" rel=\\"stylesheet\\" media=\\"print\\" onload=\\"this.media=\'all\'\\">\\n <link rel=\\"stylesheet\\" href=\\"lib/swiper.css\\" media=\\"print\\" onload=\\"this.media=\'all\'\\">\\n <link rel=\\"stylesheet\\" href=\\"lib/fontawesome.css\\" media=\\"print\\" onload=\\"this.media=\'all\'\\">\\n <link rel=\\"stylesheet\\" href=\\"css/style.css\\" media=\\"print\\" onload=\\"this.media=\'all\'\\">\\n <link rel=\\"stylesheet\\" href=\\"css/spinner.css\\" media=\\"print\\" onload=\\"this.media=\'all\'\\">\\n\\n
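If you run Critical yourself rather than through a build plugin, a minimal Node script might look like the following; the paths and viewport values are assumptions about your build output:

const { generate } = require("critical");

generate({
  base: "dist/", // directory containing the built site
  src: "index.html", // page to analyze
  target: "index.html", // rewrite the page with critical CSS inlined
  inline: true,
  width: 1300, // viewport used to decide what counts as above the fold
  height: 900,
});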
It’s good practice to add screen-specific styles to separate style sheets and load them dynamically based on the screen size:
\\n<link rel=\\"stylesheet\\" href=\\"mobile.css\\" media=\\"(max-width: 768px)\\" onload=\\"this.media=\'all\'\\">\\n\\n
Applying the techniques above removes render-blocking CSS and improves the page's performance score significantly, from 31 to 41:
\\nFrom the Coverage tab on Chrome DevTools, you can identify unused styles and manually remove them.
\\nYou can also use tools like PurgeCSS that check your CSS code and remove any unused selectors from it. This is useful, especially when using third-party libraries such as Bootstrap and Font-awesome.
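A minimal PurgeCSS configuration might look like this; the paths are assumptions about a typical project layout:

// purgecss.config.js
module.exports = {
  content: ["./src/**/*.html", "./src/**/*.js"], // files where selectors are used
  css: ["./src/css/*.css"], // stylesheets to strip
  output: "./dist/css", // where the purged CSS is written
};

You can then run it with npx purgecss --config ./purgecss.config.js.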
\\nTo further improve the performance of your page, you can make use of CSS containment. CSS containment allows the browser to isolate a subtree of the page from the rest of the page. This is essential to fix performance issues such as layout shifts:
\\narticle {\\n// the strict value is a shorthand for applying all the containment types (size, layout, style, and paint) to the element.\\n contain: strict;\\n}\\n\\n
To eliminate render-blocking JavaScript, use the defer
or async
attributes in your <script>
tag, especially with third-party scripts. This will ensure that your JavaScript code is loaded asynchronously while your HTML is being parsed.
The async
attribute downloads the scripts asynchronously while HTML parses and executes the scripts as soon as they are downloaded. This can potentially interrupt HTML parsing. The defer
attribute downloads scripts asynchronously but defers their execution until HTML parsing is completed:
<script async src=\\"bundle.js\\"></script>\\n<script defer src=\\"bundle.js\\"></script>\\n<script async defer src=\\"bundle.js\\"></script>\\n\\n
Lazy loading and code splitting can be implemented to optimize your app’s performance when loading JavaScript modules.
\\n\\nLazy loading involves loading JavaScript modules only when needed, thus removing the need to load all scripts on first paint. Code splitting involves breaking down a large script bundle into smaller chunks that are loaded on demand. Both these approaches reduce the time taken for the first page render. You can follow our guides on lazy loading and code splitting to learn how to implement them and the best practices.
\\nLoading third-party scripts can slow down your page performance. Some of the fixes to try to improve performance include:
- Loading them with the async or defer attributes

You can explore the JavaScript optimization tricks mentioned above and more from this article.
\\nFor websites built with CMS, plugins offer a convenient way to optimize and manage render-blocking resources without extensive manual coding. Here are some examples of plugins available for different CMS platforms:
\\nThese plugins can optimize various aspects of your website, including aggregating, minifying, and deferring resources, as well as improving caching mechanisms and enhancing overall performance.
\\nTo use these plugins effectively:
\\nTo improve performance when using third-party style libraries like Font Awesome, you can consider these tricks:
\\nTo optimize Google Fonts loading, you can host them locally. You will need to download the font, upload the font to your project or server, and then use @font-face
to reference the font in your CSS. To further improve speed, use <link rel=\\"preload\\">
to load fonts asynchronously.
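Put together, a locally hosted font could be wired up like this; the file name and path are assumptions about where you uploaded the font:

<link rel="preload" href="/fonts/poppins-regular.woff2" as="font" type="font/woff2" crossorigin>

<style>
  @font-face {
    font-family: "Poppins";
    src: url("/fonts/poppins-regular.woff2") format("woff2");
    font-display: swap; /* show fallback text while the font loads */
  }
</style>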
If you’re not using plugins, you can manually optimize your site by:
- Adding the async and defer attributes to your script tags

Some of the best WordPress plugins to eliminate render-blocking resources include:
\\nTo eliminate render-blocking resources in a React application, consider the following approaches:
- Using lazy() and Suspense to load components only when needed
- Using code splitting (e.g., dynamic import() with Webpack) to defer loading of non-critical scripts
- Loading third-party scripts with the async or defer attributes

While these strategies can help you eliminate render-blocking resources, thus improving web performance, it's important to remember that some challenges may arise while implementing them. Here are some considerations to keep in mind:
\\nSome techniques, like deferring non-critical CSS or asynchronously loading JavaScript, may be difficult to apply with third-party integrations. As it can be challenging to determine which resources are critical, always prioritize testing and monitoring to ensure everything on your website still works as expected.
\\nWhen it comes to code splitting or removing unused code, a comprehensive understanding of the website’s technology stack and development environment is essential. This knowledge helps you avoid the accidental removal of critical code, ensuring the website’s functionality remains intact.
\\nAs websites often rely on third-party scripts, stylesheets, or services for functionality like analytics, advertising, or social media integration, the website’s ability to load or minify third-party resources may be limited by the requirements or limitations imposed by these external dependencies.
\\n\\nOptimizing web performance by eliminating render-blocking resources is a task that requires time, expertise, and development effort. However, it’s important to remember to balance these efforts with other development tasks. This ensures a holistic approach to website development, where all aspects are given due attention.
\\nEliminating render-blocking resources may involve trade-offs or considerations that need to be carefully evaluated. For example, deferring non-critical CSS may improve initial page load times but could impact the perceived performance or user experience. Understanding these trade-offs is essential for making informed decisions.
\\n\\nEliminating render-blocking resources is just one step to improving the performance of your site. However, to achieve optimal speed and a seamless user experience, consider implementing lazy loading for images, minifying and compressing assets, leveraging a content delivery network (CDN), and optimizing JavaScript execution.
\\nRegularly auditing your site with tools like Google Lighthouse can help identify new bottlenecks and guide further improvements.
\\nFor an in-depth guide to browser rendering, check out “How browser rendering works — behind the scenes.”
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nMirage JS is an API mocking library that helps frontend developers simulate complex backend behavior without a real server. This article explores how to mock relational data models, JWT authentication, and role-based access control using Mirage’s ORM, serializers, factories, and route handlers. You’ll also learn how to seed user data and build realistic mock APIs to test features like user roles, permissions, and loading states.
\nAPI integration in frontend applications today goes beyond simple GET, POST, and PUT requests. Most frontend applications integrate APIs for authentication, role-based permissions, pagination, and other advanced features with complex API relationships.
\\nHowever, relying on a real backend to test these features can be slow and unreliable, especially in the early stages of development. While the mock responses for some of these features can be hard-coded, this doesn’t scale and often isn’t sufficient for testing features effectively.
\\nMirage JS is an API mocking library that helps simulate real-world backend complexity with its support for one-to-many and many-to-many relationships, mimicking real database operations. In this tutorial, we’ll use Mirage to explore mocking complex relationship APIs, mocking JWT authentication, and learn how to use Mirage factories to mock multiple server states, including:
\\nBefore following this tutorial, you should have:
\\nRun the following commands to add Mirage to your project:
\\n# With npm\\nnpm install --save-dev miragejs\\n\\n# With Yarn\\nyarn add --dev miragejs\\n\\n
Mirage lets you fake a backend server with API responses using route handlers, which are just JavaScript functions that return response data or objects. With createServer()
and route handlers, you can create your mock API server.
Create a mirage/books.js
file in your project’s src
folder and add the following:
//mirage/books.js\\nimport { createServer } from \\"miragejs\\"\\n\\nexport function makeServer() {\\n createServer({\\n routes() {\\n this.namespace = \\"api\\"\\n\\n\\n this.get(\\"/books\\", () => {\\n return {\\n books: [\\n { id: 1, title: \\"Think Big\\", author: \\"Ben Carson\\" },\\n { id: 2, title: \\"Rich Dad\\", author: \\"Robert Kiyosaki\\" },\\n { id: 3, title: \\"Things fall apart\\", author: \\"Chinua Achebe\\" },\\n ],\\n }\\n })\\n },\\n })\\n}\\n\\n
Next, run the Mirage server in your project’s entry file as follows:
\\n...\\nimport { makeServer } from \'./mirage/books.js\'\\nmakeServer();\\ncreateRoot(document.getElementById(\'root\')).render(\\n <StrictMode>\\n <App />\\n </StrictMode>,\\n)\\n\\n
Now, if your app makes a GET request to api/books
, Mirage will respond with the books
array.
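For example, a plain fetch call in your app code is intercepted by Mirage and answered with that data:

// Anywhere in your frontend code; no real server is involved
fetch("/api/books")
  .then((res) => res.json())
  .then((data) => console.log(data.books)); // logs the three books from the route handler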
When building modern web applications, working with relational data and handling complex API relationships is inevitable. To better understand Mirage’s mocking utilities and their use cases, we’ll get hands-on practice building a real-life application.
\\nWe’ll build a book app called BookVault — a platform for book reviews and discussions with authentication and role-based access control. To achieve this, we’ll create a fully functional mock API using Mirage JS, modeling key relationships like users, books, authors, categories, and reviews.
\\nThe mock API will handle authentication and user roles (admin
, editor
, and user
), ensuring proper role-based access control. Admins can add books, authors, and categories, while users can submit reviews. The API will also provide endpoints for fetching books, categories, authors, and user profiles, simulating a real-world backend environment.
Here is what the complete BookVault application will look like:
\\nWe’ll mainly focus on mocking the API endpoints with Mirage; you can follow the complete source code here.
\\nMocking API endpoints with complex relationships is always tricky. Fortunately, Mirage has a built-in ORM to mock relationships of any complexity.
\\nHere is the diagram for the models and relationship of the BookVault app:
\\nThis models the relationships between users, books, authors, categories, and reviews. A User
can write multiple Review
s, while each Review
belongs to a single User
and a specific Book
. A Book
belongs to one Author
and one Category
but can have multiple Review
s. Meanwhile, an Author
can write multiple Book
s, and a Category
can contain multiple Book
s.
You can declare this relationship in your models as follows:
\\n//mirage/books.js\\nimport { createServer, hasMany, belongsTo, Model } from \\"miragejs\\"\\n\\nexport function makeServer() {\\n createServer({\\n models: {\\n user: Model.extend({\\n reviews: hasMany(),\\n }),\\n book: Model.extend({\\n author: belongsTo(),\\n category: belongsTo(),\\n reviews: hasMany(),\\n }),\\n author: Model.extend({\\n books: hasMany(),\\n }),\\n category: Model.extend({\\n books: hasMany(),\\n }),\\n review: Model.extend({\\n user: belongsTo(),\\n book: belongsTo(),\\n }),\\n },\\n })\\n}\\n\\n
With this setup, Mirage knows about the relationship between these models.
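For instance, once the models are declared, related records can be created and traversed directly; the snippet below is a hedged illustration reusing names from this article:

// Inside seeds(server), for example
const author = server.create("author", { name: "Chinua Achebe" });
const book = server.create("book", { title: "Things Fall Apart", author });

console.log(book.author.name); // "Chinua Achebe"
console.log(author.books.models.length); // => 1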
\\nMost complex API endpoints return nested relational data. Mirage provides a serializer layer that we’ll use to transform our response data to include related data from different models.
\\nImport RestSerializer
and configure serializers for the Book and Review models. The book
serializer should embed its related models (Author
, Category
, and Reviews
) whenever book data is returned. Similarly, the review
serializer should embed the User
model since reviews are linked to users:
import { createServer, hasMany, belongsTo, RestSerializer } from \\"miragejs\\"\\n\\nexport function makeServer() {\\n createServer({\\n serializers: {\\n book: RestSerializer.extend({\\n include: [\\"author\\", \\"category\\", \\"reviews\\"],\\n embed: true,\\n }),\\n review: RestSerializer.extend({\\n include: [\\"user\\"],\\n embed: true,\\n }),\\n },\\n })\\n}\\n\\n
Now, the GET request to /api/books
will return each book’s Author
embedded alongside it, like this:
// GET /api/books\\n{\\n \\"books\\": [\\n {\\n \\"id\\": \\"1\\",\\n \\"title\\": \\"Think Big\\",\\n \\"author\\": { \\"name\\": \\"Ben Carson\\", \\"id\\": \\"1\\" }\\n },\\n {\\n \\"id\\": \\"2\\",\\n \\"title\\": \\"Things fall apart\\",\\n \\"author\\": { \\"name\\": \\"Chinua Achebe\\", \\"id\\": \\"2\\" }\\n }\\n ]\\n}\\n\\n
Mirage provides the seeds
hook to seed its database with some initial data once the server is started.
Let’s pre-populate Mirage’s database with sample data for our models:
\\nimport { createServer } from \\"miragejs\\"\\n\\nexport function makeServer() {\\n createServer({\\n seeds(server) {\\n server.create(\\"user\\", {\\n username: \\"admin\\",\\n email: \\"[email protected]\\",\\n password: \\"password\\",\\n role: \\"admin\\",\\n });\\n server.create(\\"user\\", {\\n username: \\"editor\\",\\n email: \\"[email protected]\\",\\n password: \\"password\\",\\n role: \\"editor\\",\\n });\\n let user = server.create(\\"user\\", {\\n username: \\"reader\\",\\n email: \\"[email protected]\\",\\n password: \\"password\\",\\n role: \\"user\\",\\n });\\n\\n let fiction = server.create(\\"category\\", { name: \\"Fiction\\" });\\n let author = server.create(\\"author\\", { name: \\"J.K. Rowling\\" });\\n let author1 = server.create(\\"author\\", { name: \\"P.J. Jones\\" });\\n\\n let book1 = server.create(\\"book\\", {\\n title: \\"Harry Potter\\",\\n author,\\n category: fiction,\\n });\\n let book2 = server.create(\\"book\\", {\\n title: \\"Fantastic Beasts\\",\\n author: author1,\\n category: fiction,\\n });\\n\\n server.create(\\"review\\", { content: \\"Amazing book!\\", user, book: book1 });\\n server.create(\\"review\\", { content: \\"Nice read!\\", user, book: book2 });\\n },\\n })\\n}\\n\\n
The seeds hook creates three users (an admin, an editor, and a reader with the role of user
), and then adds a fiction
category. Two authors, J.K. Rowling
and P.J. Jones
, are created, followed by two books, Harry Potter
and Fantastic Beasts
, each assigned to an author and categorized under fiction
. Lastly, the hook creates reviews for both books, linking them to the previously created user.
Notice how we had to seed every single record for each model in the previous section. Imagine doing the same for hundreds of records per model; it would be pretty tedious!
\\nFortunately, Mirage also includes the factory
feature to simplify seeding Mirage's database with relational data once the server is started.
We can create a factory for our user
model like this:
factories: {\\n user: Factory.extend({...})\\n}\\n\\n
Here is a factory implementation for the seeding logic covered in the previous section:
\\nimport { createServer, Factory, association } from \\"miragejs\\"\\n\\nexport function makeServer() {\\n createServer({\\n factories: {\\n user: Factory.extend({\\n username(i) {\\n return [\\"admin\\", \\"editor\\", \\"reader\\"][i];\\n },\\n email(i) {\\n return [\\"[email protected]\\", \\"[email protected]\\", \\"[email protected]\\"][i];\\n },\\n password: \\"password\\",\\n role(i) {\\n return [\\"admin\\", \\"editor\\", \\"user\\"][i];\\n },\\n }),\\n category: Factory.extend({\\n name(i) {\\n return `Fiction ${i}`;\\n },\\n }),\\n author: Factory.extend({\\n name(i) {\\n return [\\"J.K. Rowling\\", \\"P.J. Jones\\"][i];\\n },\\n }),\\n book: Factory.extend({\\n title(i) {\\n return [\\"Harry Potter\\", \\"Fantastic Beasts\\"][i];\\n },\\n author(i) {\\n return association(\\"author\\", i);\\n },\\n category() {\\n return association(\\"category\\");\\n },\\n }),\\n review: Factory.extend({\\n content(i) {\\n return [\\"Amazing book!\\", \\"Nice read!\\"][i];\\n },\\n user() {\\n return association(\\"user\\", 2); \\n },\\n book(i) {\\n return association(\\"book\\", i);\\n },\\n }),\\n },\\n })\\n}\\n\\n
Each factory defines how Mirage should dynamically generate structured data for its model.
\\nMirage’s association
function is used to link related models. The i
parameter allows indexing for dynamic data. It is used to generate unique values.
Now, we can use the createList
method to generate three users with a few lines of code:
seeds(server) {\\n server.createList(\\"user\\", 3);\\n}\\n\\n
Here is a refactor of the seeding logic covered in the previous section using the factory:
\\nexport function makeServer() {\\n createServer({\\n seeds(server) {\\n const users = server.createList(\\"user\\", 3);\\n const fiction = server.create(\\"category\\");\\n const authors = server.createList(\\"author\\", 2);\\n const books = [\\n server.create(\\"book\\", {\\n title: \\"Harry Potter\\",\\n author: authors[0],\\n category: fiction,\\n }),\\n server.create(\\"book\\", {\\n title: \\"Fantastic Beasts\\",\\n author: authors[1],\\n category: fiction,\\n }),\\n ];\\n server.create(\\"review\\", {\\n content: \\"Amazing book!\\",\\n user: users[2],\\n book: books[0],\\n });\\n server.create(\\"review\\", {\\n content: \\"Nice read!\\",\\n user: users[2],\\n book: books[1],\\n });\\n },\\n })\\n}\\n\\n
The above seed hook generates three users (admin
, editor
, reader
), a single category (Fiction
), and two authors (J.K. Rowling
and P.J. Jones
). It then creates two books (Harry Potter
and Fantastic Beasts
), linking them to their respective authors and category. Finally, it adds two reviews, both assigned to the reader
user and linked to their respective books.
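Because the factories now declare their belongsTo associations, you can lean on them entirely when the exact linkage doesn’t matter — say, in a quick test seed. A sketch (note that association() creates a fresh related record for each parent, so this produces new authors, categories, and users rather than reusing specific ones):

seeds(server) {
  server.createList("user", 3);
  // Each book pulls a new author and category from its factory associations
  server.createList("book", 2);
  // Each review pulls a new user and book from its factory associations
  server.createList("review", 2);
}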
So far, we’ve architected the structure of our models and their relationships. At this point, if we run our frontend application, we’ll have the following error:
\\nThis is because we haven’t defined routes for the endpoints that the app is trying to access.
\\nWe can mock API endpoints using the routes()
hook to define our route handlers. Update the server with the following:
export function makeServer() {\\n createServer({\\n routes() {\\n this.namespace = \\"api\\";\\n // Fetch Books\\n this.get(\\"/books\\", (schema) => {\\n return schema.books.all();\\n });\\n }\\n })\\n}\\n\\n
The this.get() method lets us mock out GET requests. The first argument is the URL we’re handling (/books), and the second argument is a function that handles the data manipulation logic and responds to our app with some data. The namespace setting prefixes every endpoint URL with /api, so /books is served at /api/books.
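Any request the app now makes to /api/books will be intercepted by Mirage instead of hitting the network. As a quick sanity check, you could call it from anywhere in the app after makeServer() has run — a minimal sketch (the data.books shape assumes Mirage’s default serializer):

fetch("/api/books")
  .then((res) => res.json())
  // Mirage's default serializer wraps the records: { books: [...] }
  .then((data) => console.log(data.books))
  .catch(console.error);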
This route handles GET /api/books/:id
requests by retrieving a book from Mirage’s database using the provided id
(a dynamic segment in our URL):
export function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.get(\\"/books/:id\\", (schema, request) => {\\n let book = schema.books.find(request.params.id);\\n return book ? book : new Response(404, {}, { error: \\"Book not found\\" });\\n });\\n }\\n })\\n}\\n\\n
This route handler defines a GET /categories
API endpoint in Mirage. It retrieves and returns all category records stored in Mirage’s mock database:
export function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.get(\\"/categories\\", (schema) => {\\n return schema.categories.all();\\n });\\n }\\n })\\n}\\n\\n
This route handler defines a GET /authors
API endpoint in Mirage. It retrieves and returns all author records stored in Mirage’s mock database when a request is made to this endpoint:
export function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.get(\\"/authors\\", (schema) => {\\n return schema.authors.all();\\n });\\n }\\n })\\n}\\n\\n
This endpoint retrieves all books written by a specific author, based on the author’s ID:
\\nexport function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.get(\\"/authors/:id/books\\", (schema, request) => {\\n let authorId = request.params.id;\\n let author = schema.authors.find(authorId);\\n return author.books;\\n });\\n }\\n })\\n}\\n\\n
When a GET
request is made to this route, the id
parameter is extracted from the URL (request.params.id
). The ID is used to find the corresponding author from Mirage’s database. Finally, it returns the list of books associated with that author.
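One caveat: if no author matches the given id, schema.authors.find returns null and author.books will throw. A slightly more defensive sketch of the same handler (assuming Response is imported from miragejs, as in the single-book route above):

this.get("/authors/:id/books", (schema, request) => {
  let author = schema.authors.find(request.params.id);
  // Guard against unknown IDs instead of throwing on null
  return author
    ? author.books
    : new Response(404, {}, { error: "Author not found" });
});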
Mocking JWT authentication endpoints is usually tricky because the jsonwebtoken package, a popular library for working with JWTs, only supports Node.js and doesn’t work in the browser. In this section, we’ll explore a simple trick to effectively mock JWT authentication endpoints.
This endpoint handles user authentication by checking the provided email and password against stored user data:
\\nexport function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.post(\\"/login\\", (schema, request) => {\\n let { email, password } = JSON.parse(request.requestBody);\\n let user = schema.users.findBy({ email });\\n\\n if (!user || user.password !== password) {\\n return new Response(401, {}, { message: \\"Invalid credentials\\" });\\n }\\n\\n let token = \\"valid-token\\";\\n\\n return {\\n token,\\n user: { id: user.id, email: user.email, role: user.role },\\n };\\n });\\n }\\n })\\n}\\n\\n
The endpoint first extracts the credentials from the request body and searches for a matching user in the database. If no user is found or the password is incorrect, it returns a 401 Unauthorized
response with an error message. If the credentials are valid, it generates a mock JWT (\\"valid-token\\"
) and returns it with the user’s ID, email, and role.
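If you want the mock token to carry a little information — say, the user’s id — without pulling in a JWT library, you can hand-roll an unsigned, JWT-shaped string in the browser. This is a purely illustrative helper, not a real JWT, and the signature segment is deliberately empty:

// Hypothetical helper for mocking only — unsigned, so never use in production
function makeMockToken(user) {
  const header = btoa(JSON.stringify({ alg: "none", typ: "JWT" }));
  const payload = btoa(JSON.stringify({ sub: user.id, role: user.role }));
  return `${header}.${payload}.`;
}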
This route handler retrieves a user’s profile based on their ID while enforcing JWT authentication:
\\nexport function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.get(\\"/user-profile/:id\\", (schema, request) => {\\n let id = request.params.id;\\n let authHeader = request.requestHeaders.Authorization;\\n\\n if (!authHeader)\\n return new Response(401, {}, { message: \\"No token provided\\" });\\n if (authHeader !== \\"valid-token\\")\\n return new Response(403, {}, { message: \\"Invalid token\\" });\\n\\n let user = schema.users.find(id);\\n return user ? user : new Response(404, {}, { error: \\"User not found\\" });\\n });\\n }\\n })\\n}\\n\\n
It first extracts the id from the request parameters and checks for an Authorization header. If no token is provided, it returns a 401 Unauthorized response; if the token doesn’t match the mock token issued at login, it responds with a 403 Forbidden status. Then, it attempts to find the user in Mirage’s database using schema.users.find(id). If the user exists, it returns the user data. Otherwise, it returns a 404 Not Found error.
You can apply the same logic for protected API routes requiring authentication.
\\nHere is how the user-profile
endpoint is accessed:
useEffect(() => {\\n const token = localStorage.getItem(\\"token\\")\\n fetch(`/api/user-profile/${userId}`,{\\n method: \\"GET\\",\\n headers: {\\n \\"Content-Type\\": \\"application/json\\",\\n Authorization: token, // Attach token here\\n },\\n })\\n .then((res) => res.json())\\n .then((data) => {\\n setProfile(data.user)\\n })\\n .catch((error) => {\\n console.error(error);\\n });\\n}, [userId]);\\n\\n
First, we retrieve the JWT token stored in localStorage after the user logs in, then send a GET request to /api/user-profile/${userId} with the token in the Authorization header.
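You may also want the client to react when the mock rejects the token. A small extension of the same fetch — the /login redirect target is illustrative:

fetch(`/api/user-profile/${userId}`, {
  method: "GET",
  headers: { "Content-Type": "application/json", Authorization: token },
})
  .then((res) => {
    if (res.status === 401 || res.status === 403) {
      // Token missing or rejected by the mock — send the user back to login
      window.location.assign("/login");
      return null;
    }
    return res.json();
  })
  .then((data) => data && setProfile(data.user));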
Mirage JS also makes it easy to mock role-based access API endpoints, allowing you to simulate different user roles and permissions.
\\nThis route handles POST /api/books/:id/review
, allowing only users with the user
role to submit reviews for a specific book:
export function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.post(\\"/books/:id/review\\", (schema, request) => {\\n let { user, content, userId } = JSON.parse(request.requestBody);\\n if (user.role !== \\"user\\") {\\n return new Response(\\n 403,\\n {},\\n { error: \\"Only users can review books\\" }\\n );\\n }\\n let book = schema.books.find(request.params.id);\\n return book\\n ? schema.reviews.create({ content, userId, book })\\n : new Response(404, {}, { error: \\"Book not found\\" });\\n });\\n }\\n })\\n}\\n\\n
It checks if the user has the user
role; otherwise, it returns a 403 Forbidden
error. If the book exists, it creates a new review linked to the book and user; otherwise, it returns a 404 Not Found
error.
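On the client side, submitting a review is just a POST with the logged-in user included in the body. A sketch, where user, userId, and bookId would come from your app’s auth and routing state:

fetch(`/api/books/${bookId}/review`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ user, userId, content: "Great plot twist!" }),
})
  .then((res) => res.json())
  .then((review) => console.log("Created review:", review));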
This route handles POST /authors
requests, allowing only users with the admin
role to add authors:
export function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.post(\\"/authors\\", (schema, request) => {\\n let author = JSON.parse(request.requestBody);\\n if (author.user.role !== \\"admin\\") {\\n return new Response(403, {}, { error: \\"Permission denied\\" });\\n }\\n return schema.authors.create(author);\\n });\\n }\\n })\\n}\\n\\n
The endpoint checks if the user.role
is admin
. If the user is not, it returns a 403 Forbidden
response with an error message Permission denied
. Otherwise, it creates a new author entry in the Mirage JS database.
This route handles POST /categories
requests allowing only users with the admin
role to add new categories:
export function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.post(\\"/categories\\", (schema, request) => {\\n let category = JSON.parse(request.requestBody);\\n if (category.user.role !== \\"admin\\") {\\n return new Response(403, {}, { error: \\"Permission denied\\" });\\n }\\n return schema.categories.create(category);\\n }); \\n }\\n })\\n}\\n\\n
The endpoint checks if the user.role
is admin
. If the user is not, it returns a 403 Forbidden
response with an error message. Otherwise, it creates a new category entry in the Mirage database.
This route handles a POST /books
request, allowing users with the admin
role to add a new book:
export function makeServer() {\\n createServer({\\n routes() {\\n ...\\n this.post(\\"/books\\", (schema, request) => {\\n let book = JSON.parse(request.requestBody);\\n if (book.user.role !== \\"admin\\") {\\n return new Response(403, {}, { error: \\"Permission denied\\" });\\n }\\n return schema.books.create(book);\\n });\\n }\\n })\\n}\\n\\n
The endpoint first parses the request body to extract the book data. If the user’s role is not admin
, it returns a 403 Forbidden
response with an error message. Otherwise, it creates and stores the book in Mirage’s database.
Now, run the app and everything should be working as expected!
\\nIn this tutorial, we explored mocking complex relationship APIs using Mirage JS, mocking JWT authentication, and how to use Mirage JS factories to mock multiple server states. We also covered mocking role-based access API endpoints that allow you to simulate different user roles and permissions.
\\nWith this tutorial, you can build MVPs of any complexity without a backend. Best of all, once your backend API is stable, all you have to do is remove the mock server from your app and point the API URLs at the real backend.
\\nIf you encounter any issues while following this tutorial or need expert help with web/mobile development, don’t hesitate to reach out on LinkedIn. I’d love to connect and am always happy to assist!
\\n Looking for the top Go frameworks for the web? You came to the right place. Go is a multiparadigm, statically typed, and compiled programming language designed by Google. It is similar to C, so if you’re a fan of C, Go will be an easy language to pick up.
\\nMany developers have embraced Go because of its garbage collection, memory safety, and structural typing system. This article will explore the top eight Go web frameworks, evaluating their features, advantages, and potential drawbacks to determine which will best suit your project.
\\nEditor’s note: This article was updated by Jude Miracle in April 2025 to include information on new, emerging Go frameworks (FastHTTP, Gorilla, Chi, Hertz) and remove commentary on more outdated frameworks (Iris, Revel).
\\nHere’s a quick summary of the Go web frameworks we’ll review in this post:
\\nFramework | Performance (RPS/memory) | Best use cases | Ease of use & adoption | GitHub stars | Recent commits | Community engagement
---|---|---|---|---|---|---
Gin | High (excellent RPS, low memory) | APIs, microservices, lightweight web apps | Easy to learn, large community | 81k+ | Very active | Large, active community; extensive documentation
Fiber | Very high (superior RPS, low memory) | APIs, microservices, high-performance web apps | Express.js-like syntax, easy migration, growing community | 35k+ | Active | Mature; quick adoption, good documentation
Echo | High (excellent RPS, low memory) | APIs, microservices, performance-critical applications | Minimalist, clean API, good documentation, growing community | 30k+ | Active | Mature; good support, focused on performance
Beego | Moderate (full-stack) | Full-stack web apps, APIs, enterprise applications | Feature-rich, ORM, caching, larger codebase, mature community | 31k+ | Moderately active | Mature; established community, comprehensive features
FastHTTP | Extremely high (raw performance focus) | High-performance APIs, custom web servers | Low-level, requires more control, less abstraction | 22k+ | Active | Good; performance-focused, less abstraction
Gorilla | Moderate (modular, flexible) | APIs, WebSockets, complex routing, general web development | Modular, powerful, mature, good documentation, widely used components | 21k+ | Moderately active | Mature; large, commonly used libraries, though not a full framework like the others listed
Chi | High (lightweight router) | APIs, microservices, modular routing | Simple, composable router, easy integration | 19k+ | Active | Good; widely used router, plugin ecosystem
Hertz | Extremely high (optimized for microservices) | Microservices, high-throughput APIs, ByteDance infrastructure | Optimized for performance, specialized use case, growing adoption | 6k+ | Very active | Growing; ByteDance backing, performance-focused
Before reviewing the top Go frameworks more deeply, let’s understand what Go is truly used for. Aside from building general web applications, the language’s scope encompasses a wide range of use cases:
\\nGo web frameworks were created to ease Go web development, letting developers skip repetitive setup and focus on a project’s functionality.
\\nUsing Go without a framework is possible. However, it is much more tedious, and developers must constantly rewrite code. This is where the web frameworks come in.
\\nWith frameworks, for example, instead of writing a wrapper around a database connection in every new project, developers can pick a favorite framework and focus more on the business logic.
\\nNow, let’s review a few of the features that make Go popular.
\\nStatic typing provides better runtime performance: because types are known at compile time, the compiler can heavily optimize the resulting binary, which is why Go is often used to build high-performance applications.
\\nStatic typing also finds hidden problems like type errors. For example, if I were creating an integer variable, the compiler would recognize its type as an integer and only accept integer values. This makes it easier to manage code for larger projects.
\\nMany developers have created production-ready packages on top of the standard Go packages. These packages often become the standard libraries for specific features. For example, Gorilla Mux was created for routing by the community because the initial Go router is quite limited.
\\nGo packages for most popular services — including drivers for MongoDB, Redis, and MySQL — are available on GitHub.
The development time for these Go frameworks is fast and simple. Packages are already available and can be imported easily, eliminating the need to write redundant code.
\\nGo’s goroutines provide language-level support for concurrency through lightweight threads, and its channel-based communication style helps you avoid race conditions — all with overall simplicity.
\\nCompared to languages like Java, JavaScript, etc., Go has a relatively young ecosystem, so you may have fewer libraries to work with. Or, you may need to implement some functionalities from scratch, depending on what you’re building.
\\nGo’s minimalistic design philosophy may seem limiting if you’re used to other languages. The missing features may be complementary to your project, and Go’s standard library is limited, so you might have to rely on third-party packages for functionalities that are readily available in other languages.
\\n\\nIn the following sections, we’ll explore the top eight Go frameworks to see what they each offer.
\\nGin is an HTTP web framework written in Go that is very popular, with over 81k stars on GitHub at the time of writing. Currently, Gin is the most popular framework for building microservices because it offers a simple way to build a request-handling pipeline where you can plug in middleware.
\\nGin also boasts a Martini-like API and, according to Gin’s GitHub page, is up to 40 times faster than Martini thanks to httprouter. Below are some of its amazing features.
\\nGin offers convenient error management. This means that when encountering any errors during an HTTP request, Gin documents the errors as they occur:
\\nc.AbortWithStatusJSON(400, gin.H{\\n \\"error\\": \\"Blah blahhh\\"\\n})\\n\\n// continue\\nc.JSON(200, gin.H{\\n \\"msg\\": \\"ok\\"\\n})\\n\\n
Gin also makes it incredibly easy to create middleware, which can be plugged into the request pipeline by creating a router with r := gin.New()
and adding a logger middleware with r.Use(gin.Logger())
.
You can also use a recovery middleware with r.Use(gin.Recovery())
.
Gin’s performance is thanks to its route grouping and small memory. Gin’s grouping ability for routes lets them nest infinitely without affecting performance.
\\nIts fast performance is also thanks to its small memory, which Gin uses or references while running. The more memory usage the server consumes, the slower it gets. And because Gin has a low memory footprint, it provides faster performance.
\\nFinally, Gin provides support for JSON validation. Incoming request payloads — such as input data from the client — can be validated for required values before they’re used, helping developers avoid acting on inaccurate or incomplete data.
\\nGin is a simple, easy-to-use framework. This makes it the ideal framework for those just starting with Go, because it is minimal and straightforward to use.
Check out this quickstart Gin tutorial for more information.
\\nEcho is another promising framework created by Labstack, with nearly 30k stars on GitHub. Echo is regarded as a micro framework — closer to a router plus a standard library than a full-stack framework — and has thorough documentation for developers to follow.
\\nThis framework is great for people who want to learn how to create APIs from scratch, thanks to its extensive documentation.
\\nEcho lets developers define their own middleware and also has built-in middleware to use. This gives developers the ability to create custom middleware to get specific functionalities while having the built-in middleware speed up production.
\\nEcho also supports HTTP/2 for faster performance and an overall better user experience. Its API also supports a variety of HTTP responses like JSON, XML, stream, blob, file, attachment, inline, and customized central HTTP error handling.
\\nFinally, Echo supports a variety of templating engines, providing the flexibility and convenience developers need when choosing an engine.
\\nFiber is another Express.js-like web framework written in Go that boasts low memory usage and rich routing. Built on top of fasthttp, the fastest HTTP engine for Go, Fiber is one of the fastest Go frameworks.
\\nCreated with the main focus of minimalism and the Unix philosophy to provide simple and modular software technology, the idea for Fiber was to allow new Go developers to begin creating web applications quickly.
\\nFiber boasts a built-in rate limiter that helps reduce traffic to a particular endpoint. This is helpful if, for example, a user tries to sign in to an account continuously and knows that it might be malicious activity.
\\nFiber can serve static files — style sheets, scripts, and images — directly from the server. Because the content is identical on every request, these files are easy to cache and consume little memory.
\\nFiber’s support for WebSocket bidirectional TCP connections is useful for creating real-time communications, like a chat system.
\\nLike the other Go frameworks we’ve mentioned in this post, Fiber has versatile middleware support, supports a variety of template engines, has low memory usage and footprint, and provides great documentation that is easy for new users to follow.
\\nBeego is another Go web framework that is mostly used to build enterprise web applications with rapid development.
\\nBeego has four main parts that make it a viable Go framework:
\\n- Base modules, including log, config, and governor
- Task, for running timed or periodic jobs
- Client, which encapsulates the ORM, httplib, and cache modules
- Server, which includes the web module
Below are some of the features that Beego offers.
\\nBecause Beego focuses on enterprise applications, which tend to be very large with a lot of code powering many features, a modular structure arranges modules for specific use cases, optimizing performance.
\\nThe modular structure of the Beego framework supports features like a configuration module, logging module, and caching module.
\\nBeego also uses a regular MVC architecture to handle specific development aspects in an app, which is also beneficial for enterprise applications.
\\nBeego also supports namespace routing, which defines where the Controller
is located for a Route
. Here is an example:
func init() {\\n\\nns :=\\n beego.NewNamespace(\\"/v1\\",\\n beego.NSRouter(\\"/auth\\", &controllers.AuthController{}),\\n beego.NSRouter(\\"/scheduler/task\\",&controllers.TaskController{}),\\n )\\n\\n beego.AddNamespace(ns) \\n}\\n\\n
Beego’s automated API documentation through Swagger provides developers with the automation they need to create API documentation without wasting time manually creating it.
\\nRoute annotations let developers define the route target for a given URL directly on the controller. This means routes don’t need to be registered in the route file again; the controller only needs to be registered with Include.
With the following route annotation, Beego parses and turns them into routes automatically:
\\n// Weather API\\ntype WeatherController struct {\\n web.Controller\\n}\\n\\nfunc (c *WeatherController) URLMapping() {\\n c.Mapping(\\"StaticBlock\\", c.StaticBlock)\\n c.Mapping(\\"AllBlock\\", c.AllBlock)\\n}\\n\\n// @router /staticblock/:key [get]\\nfunc (this *WeatherController) StaticBlock() {\\n}\\n\\n// @router /all/:key [get]\\nfunc (this *WeatherController) AllBlock() {\\n}\\n\\n
Then, register the Controller
:
web.Include(&WeatherController{})\\n\\n
FastHTTP, as the name suggests, is a very fast HTTP framework for Go. It focuses on high performance and efficiency. Unlike many other Go web frameworks, FastHTTP does not use the standard net/http
package. Instead, it builds its own HTTP server and client from the ground up, which is optimized for speed and low memory use. As of now, FastHTTP is quite popular, with over 22k stars on GitHub.
FastHTTP works well for situations that need high speed and low delay, such as real-time APIs, microservices, and web applications with high user traffic. It can handle over 100,000 requests per second, manage over 1 million active connections at once, and work with different types of data like JSON, XML, and form-data. Here are some of its key features.
\\nFastHTTP is designed for speed. It avoids using the standard net/http
package and improves every part of handling HTTP requests. As a result, FastHTTP offers much higher speed and lower delays compared to many other frameworks. Tests show that FastHTTP consistently outperforms other Go web frameworks, particularly when there are many requests at the same time.
Here’s an example of setting up a basic FastHTTP server:
\\npackage main\\n\\nimport (\\n \\"github.com/valyala/fasthttp\\"\\n)\\n\\nfunc main() {\\n requestHandler := func(ctx *fasthttp.RequestCtx) {\\n ctx.WriteString(\\"Hello, FastHTTP!\\")\\n }\\n\\n fasthttp.ListenAndServe(\\":8080\\", requestHandler)\\n}\\n\\n
In addition to its server capabilities, FastHTTP has a fast and efficient HTTP client that works well for making requests. This client is designed for speed, making it a great option for applications where performance is important.
\\nHere’s an example of using the FastHTTP client:
\\nfunc main() {\\n client := &fasthttp.Client{}\\n req := fasthttp.AcquireRequest()\\n resp := fasthttp.AcquireResponse()\\n\\n defer fasthttp.ReleaseRequest(req)\\n defer fasthttp.ReleaseResponse(resp)\\n\\n req.SetRequestURI(\\"http://example.com\\")\\n if err := client.Do(req, resp); err != nil {\\n panic(err)\\n }\\n\\n println(\\"Response status:\\", resp.StatusCode())\\n println(\\"Response body:\\", string(resp.Body()))\\n}\\n\\n
FastHTTP uses a special object called RequestCtx
to manage HTTP requests and responses. This object is reused for different requests, which helps reduce memory use and the need for garbage collection. This design leads to FastHTTP’s great performance.
Here’s an example of how to handle different HTTP methods and read request data:
\\nrequestHandler := func(ctx *fasthttp.RequestCtx) {\\n switch string(ctx.Path()) {\\n case \\"/hello\\":\\n ctx.WriteString(\\"Hello, FastHTTP!\\")\\n case \\"/user\\":\\n if ctx.IsPost() {\\n name := ctx.FormValue(\\"name\\")\\n ctx.WriteString(\\"Hello, \\" + string(name))\\n } else {\\n ctx.Error(\\"Method not allowed\\", fasthttp.StatusMethodNotAllowed)\\n }\\n default:\\n ctx.Error(\\"Not found\\", fasthttp.StatusNotFound)\\n }\\n}\\n\\n
To get started with this high-performance framework, explore its documentation and visit its GitHub repo.
\\nGorilla is not exactly a web framework; it’s a set of modular packages that help developers build web applications in Go. It is known for being flexible and simple, making it popular among developers who like to create their own toolkit instead of using a strict framework. As of now, the Gorilla toolkit has gained a lot of popularity, with its individual packages receiving thousands of stars on GitHub.
Gorilla’s modular approach lets developers choose the components they need, making it adaptable for various use cases. Below are some of its standout packages and features.
\\nGorilla Mux is a popular package in the Gorilla toolkit. It offers a strong and flexible router for building HTTP services. Unlike the default Go router, Gorilla Mux supports advanced routing features like route parameters, query parameters, and HTTP method-based routing.
\\nHere’s a simple example of how to set up a basic Gorilla Mux router:
\\npackage main\\n\\nimport (\\n \\"net/http\\"\\n \\"github.com/gorilla/mux\\"\\n)\\n\\nfunc main() {\\n r := mux.NewRouter()\\n r.HandleFunc(\\"/\\", func(w http.ResponseWriter, r *http.Request) {\\n w.Write([]byte(\\"Hello, Gorilla Mux!\\"))\\n })\\n\\n http.ListenAndServe(\\":8080\\", r)\\n}\\n\\n
Gorilla Sessions helps you manage user sessions easily in your web application. It works with different types of session storage, like cookies and server-side options such as Redis or MySQL. This package is great for setting up user authentication and storing user data. Here’s an example of how to use Gorilla Sessions:
\\npackage main\\n\\nimport (\\n \\"net/http\\"\\n \\"github.com/gorilla/sessions\\"\\n)\\n\\nvar store = sessions.NewCookieStore([]byte(\\"super-secret-key\\"))\\n\\nfunc main() {\\n http.HandleFunc(\\"/\\", func(w http.ResponseWriter, r *http.Request) {\\n session, _ := store.Get(r, \\"session-name\\")\\n session.Values[\\"foo\\"] = \\"bar\\"\\n session.Save(r, w)\\n w.Write([]byte(\\"Session saved!\\"))\\n })\\n\\n http.ListenAndServe(\\":8080\\", nil)\\n}\\n\\n
Gorilla Websocket is a popular tool for using WebSocket communication in Go. It offers a simple and user-friendly way to build real-time applications, such as chat servers, live notifications, and collaborative tools. Here’s how to set up a WebSocket server using Gorilla Websocket:
\\npackage main\\n\\nimport (\\n \\"net/http\\"\\n \\"github.com/gorilla/websocket\\"\\n)\\n\\nvar upgrader = websocket.Upgrader{\\n CheckOrigin: func(r *http.Request) bool {\\n return true\\n },\\n}\\n\\nfunc main() {\\n http.HandleFunc(\\"/ws\\", func(w http.ResponseWriter, r *http.Request) {\\n conn, _ := upgrader.Upgrade(w, r, nil)\\n defer conn.Close()\\n\\n for {\\n messageType, p, err := conn.ReadMessage()\\n if err != nil {\\n return\\n }\\n conn.WriteMessage(messageType, p)\\n }\\n })\\n\\n http.ListenAndServe(\\":8080\\", nil)\\n}\\n\\n
Gorilla uses Gorilla Schema to make it easy to convert form data into Go structs. It is useful for managing HTML form submissions and checking user input for correctness. Here’s an example of how to use Gorilla Schema:
\\npackage main\\n\\nimport (\\n \\"net/http\\"\\n \\"github.com/gorilla/schema\\"\\n)\\n\\ntype User struct {\\n Name string `schema:\\"name\\"`\\n Email string `schema:\\"email\\"`\\n}\\n\\nfunc main() {\\n http.HandleFunc(\\"/\\", func(w http.ResponseWriter, r *http.Request) {\\n if r.Method == \\"POST\\" {\\n r.ParseForm()\\n var user User\\n decoder := schema.NewDecoder()\\n decoder.Decode(&user, r.PostForm)\\n w.Write([]byte(\\"User: \\" + user.Name + \\", Email: \\" + user.Email))\\n } else {\\n w.Write([]byte(\\"Submit a form!\\"))\\n }\\n })\\n\\n http.ListenAndServe(\\":8080\\", nil)\\n}\\n\\n
Chi is a lightweight and easy-to-use router for building HTTP services in Go. Its simplicity and flexibility make it a popular choice for Go developers creating RESTful APIs and web applications. As of now, Chi has over 19k stars on GitHub, showing its wide use and strong community support.
Chi has a modular design that helps developers create robust and scalable applications without unnecessary complexity. It offers a clear and straightforward API for routing and middleware, making it suitable for both beginners and experienced developers. Here are some of its key features.
\\nChi is a lightweight and fast framework that has minimal overhead. It does not include built-in features like templating or database management, which helps keep it small and focused. Instead, Chi encourages developers to use external libraries for extra features. This way, your application only includes what it needs.
\\nHere’s an example of defining routes with parameters and nested routes:
\\nr := chi.NewRouter()\\n\\nr.Route(\\"/users\\", func(r chi.Router) {\\n r.Get(\\"/\\", listUsers) // GET /users\\n r.Post(\\"/\\", createUser) // POST /users\\n\\n r.Route(\\"/{userID}\\", func(r chi.Router) {\\n r.Get(\\"/\\", getUser) // GET /users/{userID}\\n r.Put(\\"/\\", updateUser) // PUT /users/{userID}\\n r.Delete(\\"/\\", deleteUser) // DELETE /users/{userID}\\n })\\n})\\n\\n
Chi has a strong middleware system that helps developers easily add features like logging, authentication, and error handling. In Chi, middleware can be combined, allowing you to use it globally, on specific routes, or for groups of routes. Chi also provides built-in middleware for common tasks.
\\nHere’s an example of using middleware in Chi:
\\nr := chi.NewRouter()\\n\\n// Global middleware\\nr.Use(middleware.Logger)\\nr.Use(middleware.Recoverer)\\n\\n// Route-specific middleware\\nr.With(middleware.BasicAuth(\\"admin\\", \\"password\\")).Get(\\"/admin\\", adminHandler)\\n\\n
Chi also uses Go’s context package to share information and manage data related to requests. This makes it easy to pass details like user authentication or request IDs between middleware and handlers. Chi also makes it simple to chain middleware together, allowing developers to create more complex ways to handle requests.
\\nBuilt on net/http and context
Chi is a framework that builds on Go’s standard net/http
package and uses the context package effectively. This makes it easy to use with other Go libraries and tools. Unlike some frameworks that create their own systems, Chi works directly with http.Handler
and http.HandlerFunc
. This simplicity makes it easy to integrate with other Go code.
Chi also uses context.Context
for values related to requests, as well as for timeouts and cancellations. This fits well with Go’s approach to handling multiple tasks at the same time.
For more information, check out the Chi GitHub repository and explore its documentation to get started with this powerful framework today!
\\nHertz is a high-performance, extensible HTTP web framework designed for building efficient and scalable web applications in Go. Developed by CloudWeGo, Hertz is optimized for modern cloud-native environments and has quickly gained traction in the Go community for its speed, flexibility, and developer-friendly features. At the time of writing, Hertz has over 6k stars on GitHub and active contributors.
Hertz is built to handle high-concurrency scenarios, making it an excellent choice for microservices, APIs, and real-time applications. It combines the simplicity of Go with advanced features like high-performance routing, middleware support, and seamless integration with other CloudWeGo ecosystem tools. Below are some of its standout features.
\\nHertz is built on top of Netpoll, a high-performance I/O library that significantly reduces latency and improves throughput. Its routing engine can manage thousands of requests each second with little extra cost, making it perfect for applications that need to handle many users at once. Hertz allows for organized coding with its support for parameterized routing and route grouping.
\\nHere’s an example of defining routes in Hertz:
\\nh := server.Default()\\n\\nh.GET(\\"/hello\\", func(c context.Context, ctx *app.RequestContext) {\\n ctx.String(200, \\"Hello, Hertz!\\")\\n})\\n\\nh.POST(\\"/users\\", func(c context.Context, ctx *app.RequestContext) {\\n // Handle user creation\\n})\\n\\nh.GET(\\"/users/:id\\", func(c context.Context, ctx *app.RequestContext) {\\n userId := ctx.Param(\\"id\\")\\n // Fetch user by ID\\n})\\n\\n
Hertz offers a strong middleware system that helps developers extend the framework’s features. You can use middleware for tasks like logging, authentication, rate limiting, and error handling. Hertz also provides built-in middleware for common tasks, including recovery and CORS.
\\nHere’s an example of adding middleware to a Hertz application:
\\nh := server.Default()\\n\\n// Custom middleware\\nh.Use(func(c context.Context, ctx *app.RequestContext) {\\n fmt.Println(\\"Request received\\")\\n ctx.Next(c)\\n})\\n\\n// Built-in recovery middleware\\nh.Use(recovery.Recovery())\\n\\n
The Hertz framework supports HTTP 1.1 and the ALPN protocol right out of the box. Its layered design also allows for custom protocol implementations, so it can adapt to different needs.
\\nHertz is easy to extend, allowing developers to add third-party libraries and tools without trouble. It is part of the CloudWeGo ecosystem, which also includes other high-performance tools like Kitex, a Go RPC framework, and Volo, a Rust RPC framework. This makes Hertz a great option for building complete cloud-native applications.
\\nFor more information, check out the official Hertz GitHub repository and explore its documentation to get started with this powerful framework today!
\\nThe performance of a web framework is important for the scalability and responsiveness of web applications.
\\nThe smallnest/go-web-framework-benchmark repository offers helpful insights into how different popular Go web frameworks compare in terms of performance. By looking at the benchmark results, we can better understand the strengths and weaknesses of each framework based on different levels of concurrent users and processing tasks. The benchmarks show how well each framework works in various situations.
\\nThe tests cover four main scenarios, each presented as a graph in the benchmark repository: overall performance for a CPU-bound task across different numbers of concurrent requests, CPU-bound task performance specifically, performance at varying levels of concurrency, and how performance changes as per-request processing time grows.
\\nFastHTTP is usually the best performer. Gin and Echo provide a great balance of performance and user-friendliness.
\\nPerformance varies greatly depending on concurrency, processing time, and other workload characteristics. For high-concurrency, performance-critical applications, consider FastHTTP. For a good balance of performance and ease of use, choose Gin or Echo.
\\nFor complex, enterprise-level applications, look at Beego. For lightweight, simple services, Chi may be best.
\\nThe Go web framework community is active on Reddit and GitHub. The most popular frameworks on GitHub are Gin and Echo. Gin has the most stars and forks, making it a top choice for fast APIs. Fiber is also becoming popular due to its Express.js-like syntax and good performance, although there are some worries about how well it works with Go’s standard library.
\\nChi and Gorilla are known for their simplicity. Chi focuses on middleware, while Gorilla provides a modular toolkit. Hertz, which is supported by ByteDance, is growing in use, especially for larger applications. FastHTTP is very fast but has compatibility issues.
\\nOn Reddit, there is a discussion on whether to use frameworks or minimal routers. Gin is recommended for beginners, while Fiber is popular with those moving from Node.js. Chi is also becoming a popular recommendation because of its simplicity.
\\nWhen choosing a framework, consider your project’s needs, such as performance, ease of use, features, and developer skills. The Go web framework ecosystem provides effective tools for building everything from microservices to complex systems.
\\nGin is a lightweight and fast web framework for building APIs quickly. Its simple routing makes it easy to create web services. Fiber, inspired by Express.js, offers speed and familiarity, making it great for WebSocket applications. It also takes advantage of Go’s efficiency with memory and concurrency. Echo is for developers who need type safety and support for middleware. It suits enterprise applications and API gateways, although it has a steeper learning curve. Beego provides a complete MVC architecture with built-in tools. It is suitable for large-scale applications that require complex software and session management.
\\nFastHTTP is a library focused on low-latency programming, perfect for high-performance needs like trading platforms. Gorilla gives developers control over modular web infrastructure with flexible routing and WebSocket support, but it requires more setup. Chi is a minimalist framework that uses Go’s standard library, making it efficient for microservices and REST APIs. Hertz has a cloud-native design and supports multiple protocols. It is a good choice for modern architecture.
\\nIn this article, we explored eight popular Go web frameworks that offer a variety of features and philosophies. This list isn’t definitive. Your ideal framework might not even be on the list, but that shouldn’t stop you from choosing the right framework for your project.
\\nMany of these frameworks share similar features and are inspired by others, but each has unique aspects that are suitable for different development challenges. I hope this helps you pick the right framework for your next Go project.
Excalibur.js is a game development engine for building games in plain JavaScript (or TypeScript). It’s a great starting point for developers already familiar with JavaScript development because it eliminates the need to set up special tools — you can use Node and npm with your favorite editor. Excalibur also eliminates the requirement of learning a new language like GDScript or C# — you can simply use JavaScript. The game runs directly in the browser, so you can run and test it in an environment you’re already familiar with.
\\nThis article assumes you have a good understanding of JavaScript and JavaScript tools like Node and npm. However, you don’t need to know game development or Excalibur.js. Game development is a vast field, and this article will only introduce the basics to get you started, as well as provide some helpful tips to write your first game. Like learning any other development, you’ll still need to make a lot of projects and read docs and other tutorials to improve!
\\nYou might be asking: Why another game development tutorial?
\\nThere are a lot of good tutorials for making games. Given the number of games that are there on Steam and the app store, this is by no means a niche domain. Additionally, most game engines have a “Getting started” guide in their official docs. So, why am I writing this article?
\\nIn my experience, a lot of game development tutorials have the same problem as beginner web development tutorials. They do a great job of covering the basics, introducing key concepts, and explaining them well. If you follow along, you’ll likely end up with a nice, working demo game. But just like the dreaded “tutorial hell” of web dev, I often find myself stuck after finishing a game dev tutorial — caught in a limbo where I understand the fundamentals but have no clear idea of what to do next to build my own game, rather than just another to-do app clone.
\\nIt’s like when, after finishing a React tutorial, you know what hooks are, what props are, and maybe even something about state management. But you’re not sure how to make your own website using these concepts.
\\nSo, I’m writing this with the hopes that it will help you write your own game. The first section will show you the basics you need to understand to make any game in Excalibur.js. Then, in the second section, I’ll talk about a method of using five implementation questions that I find helpful while developing a game. You should be able to apply that to your concepts and make your game.
\\nLet’s get started!
\\nA game engine occupies the same space in game development that a JavaScript engine occupies in frontend development: it handles the main game loop of displaying things on screen, accepts inputs from users, runs code in response to them, and updates the overall state of the game.
\\nMost games need these functionalities in one form or another, so instead of everyone having to write their own loop implementations, all of these get abstracted in the engine, which can be used as a sort of library in the case of Excalibur.js.
\\nExcalibur is a strictly 2D game engine written in JavaScript. As mentioned earlier, the biggest advantage it offers web developers is familiarity, making it a great starting point for game development. That said, Excalibur has some downsides: building for desktop or console can be tricky, and it’s limited to 2D. If you’re looking to create something in 3D, you might want to check out a different game engine like Three.js.
\\n“Hello World” is a popular example used to validate that the compiler/interpreter of your programming language is correctly installed and functioning. We will do the same by setting up Excalibur.js and displaying a square on the screen as a game dev equivalent of a “Hello World” example.
\\nWe’ll use npx and the Excalibur CLI to set up a basic template. Run the following:
\\nnpx create-excalibur@latest\\n\\n
In the options, select the following:
\\nUsing Vite allows for an easier change-build-run cycle by using a single command.
\\n\\nThis will bootstrap a basic project, and you can run npm run start
to start the Vite server. On the localhost
, you will see the sample, where a sword moves on the screen in a square. Let’s change this to make a stationary square so we can understand the basics of creating an object in Excalibur.
Open the player.ts
file. This is where our “player,” i.e., the sword, is defined. There is a lot here, and you can look through and read the comments to get an idea of what functionality is possible. For our purposes, we will simply delete this and start from scratch.
Delete all the contents from the file and add the following:
\\nimport { Actor, Color, Engine, Rectangle, vec } from \\"excalibur\\";\\nexport class Player extends Actor{\\n constructor(){\\n super({\\n name: \\"Player\\",\\n pos:vec(250,250),\\n width:100,\\n height:100\\n })\\n }\\n onInitialize() {\\n this.graphics.add(new Rectangle({width:100,height:100,color:Color.Purple}));\\n this.on(\'pointerdown\',(e)=>{\\n console.log(\\"You clicked at point \\",e.worldPos.toString());\\n });\\n }\\n}\\n\\n
We start by importing the classes and functions from Excalibur. Then, we define our “player” class. As the rest of the demo code already uses the class name Player
, we’ll keep it the same, but you can change it if you also update the references in other places. The Actor
class is provided by Excalibur, and anything that can move, collide with other things, and react to a player’s input must extend this class.
We start our constructor by calling super()
, which sets up the basic functionality provided by the Actor
class, and should always be done. We provide it an object with information about our “player”:
x = 250 ; y = 250
100
, which are used in collision calculationsFor the position, remember that like most game engines, the X axis starts on the top-right and increases on the left, while the Y axis starts on the top-right and increases downwards.
\\nThe onInitialize
method is used for one-time initializations before the character is rendered on the screen. Here, we can load the sprite, set up animations, and set up listeners for signals. In our code, we’ll add a rectangle using the this.graphics.add
method.
this.graphics
is the built-in graphics context for Actor
classes and is used to draw them on the screen — either a simple shape or a sprite loaded from a sprite sheet, which we’ll see later. Here, we use a simple rectangle and provide the width, height, and color to the Rectangle
constructor.
We then set an event listener and listen for pointer click events. In the handler, we log the position of the mouse pointer at the click time. If you run it in the browser, you will see something like this:
\\nAs you can see, after clicking the Play game button, our rectangle fades in, and when we click on it, the click position is logged in the console. Next, we will see how we can make our character react to user input.
\\nNow that we know how to display a basic shape, we can move on to handling inputs from the player.
\\nThe most common way players will interact will be with a mouse and keyboard. We saw how to handle clicks in the previous section, so now let’s see how we can make our square move when the user presses an arrow button.
\\nThe Actor
base class provides an update
method, which is used by the engine to update the actor’s state. In this method, we can check if any key is pressed and then update our square’s position:
update(engine,elapsed) {\\n super.update(engine,elapsed);\\n}\\n\\n
N.B., we must call super.update in our override because, otherwise, the core update implementation won’t be called, which can break our basic functionality.
Using the engine.input.keyboard.isHeld
function, we can check if a key is pressed. This returns true
if the given key is held down. Several other functions can be used to check the key down and release events, such as wasPressed
or wasReleased
.
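For one-shot actions like jumping or firing, these edge-triggered checks are a better fit than isHeld. A small sketch inside the same update method (Keys is imported from excalibur, as before):

// Inside update(), after super.update(engine, elapsed):
if (engine.input.keyboard.wasPressed(Keys.Space)) {
  // Runs once on the frame the key goes down, not on every frame it is held
  console.log("Jump!");
}
if (engine.input.keyboard.wasReleased(Keys.Space)) {
  console.log("Key released");
}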
We’ll check if the right arrow is held and update the square’s position accordingly:
\\n...\\nsuper.update(engine, elapsed);\\nif (engine.input.keyboard.isHeld(Keys.ArrowRight)) {\\n this.pos = this.pos.add(vec(3, 0));\\n}\\n\\n
If the key is held, we update the position by three pixels. An important thing to note is that this will be called on each frame render, so if your game is running at a high frame rate, this will be called more times than if it is running at a low frame rate. So, for example, if the game runs at 30fps, the player will see a 3 * 30 = 90px change in one second, whereas if the game runs at 60fps, the user will see a 3 * 60 = 180px change in one second.
\\nTo solve this issue, we can use the elapsed parameter, which gives the milliseconds since the last call to update. Instead of a fixed per-frame step, we scale a speed in pixels per second by elapsed / 1000 — for example, vec(180 * (elapsed / 1000), 0) — to make movement consistent irrespective of the frame rate.
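Putting that together for all four directions, a frame-rate-independent version of the update method might look like this (Engine, Keys, and vec come from excalibur; the 180 px/s speed is an arbitrary choice):

update(engine: Engine, elapsed: number) {
  super.update(engine, elapsed);
  const speed = 180; // pixels per second
  const step = speed * (elapsed / 1000); // elapsed is in milliseconds
  if (engine.input.keyboard.isHeld(Keys.ArrowRight)) this.pos = this.pos.add(vec(step, 0));
  if (engine.input.keyboard.isHeld(Keys.ArrowLeft)) this.pos = this.pos.add(vec(-step, 0));
  if (engine.input.keyboard.isHeld(Keys.ArrowUp)) this.pos = this.pos.add(vec(0, -step));
  if (engine.input.keyboard.isHeld(Keys.ArrowDown)) this.pos = this.pos.add(vec(0, step));
}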
If we replicate the same for the remaining three directions, we get a square that we can move using arrow keys:
\\n\\nIn a similar way, we can also handle the mouse or joystick inputs as well. For mouse events, we can set up the event listener for clicks in our onInitialize
methods as we did in the last section.
After handling player inputs, we will now see how levels work in Excalibur. In most games, you will need to switch from one level to the next, and in Excalibur, levels can be defined as Scenes
.
A Scene
is a collection of actors that are active together. This can include anything from start and end screens, individual levels, and even different stages within a level. The key thing to remember is that only one scene can be active at a time, and the engine only updates actors that belong to that scene. Anything outside the active scene won’t be rendered or updated.
Let’s take a look at the level.ts
file, which has the default scene we have been using so far:
export class MyLevel extends Scene {\\n override onInitialize(engine: Engine): void {\\n const player = new Player();\\n this.add(player); // Actors need to be added to a scene to be drawn\\n }\\n ...\\n\\n
After the imports, the MyLevel
class is defined, extending the built-in Scene
class. In the onInitialize
method, we create our square by instantiating the Player
class and then adding it to this scene.
Let’s finally take a look at the main.ts
file. This is meant mostly to instantiate the Engine
instance, add scenes to it, and start the game. Among other things, this is what we see:
...\\nscenes: {\\n start: MyLevel\\n},\\n...\\n\\n
We declare the scene with the name start
as MyLevel
. Then, in the start call, we’ll run this:
game.start(\'start\',\\n...\\n\\n
We provide the name of the first scene to run after the player clicks on the start button, along with the transition options for how we want to transition into that scene.
\\nLet’s rename the class to Level1 and update the references as well. Then, let’s add a text label showing LEVEL 1. In the onInitialize method of the Level1 class, we add the following:
const title = new Label({\\n text: \\"LEVEL 1\\",\\n pos: vec(300,100),\\n font: new Font({\\n family: \'impact\',\\n size: 48,\\n unit: FontUnit.Px\\n })\\n})\\nthis.add(title);\\n\\n
We provide the constructor of the Label
class the text, position, and font to be used. We then add it to the level similar to the player object. Now, if you run the game, you will see that it has the label displayed. Next, we will see how we can change the scenes.
For our example, we will change the scene from Level1
, which we have been using until now, to a scene called Level2
. For now, the only difference between them will be the label, which will show the names of the respective levels. In an actual game, we would probably parameterize this and pass in the label text; but, for now, we will make a completely different class for Level2
.
We will also change the constructor of these levels to take a player instance and set the player from there. This way, both levels will share the same player instance:
\\nexport class Level1 extends Scene {\\n player: Player;\\n constructor(p: Player) {\\n super();\\n this.player = p;\\n }\\n ...\\n\\n
In the onInitialize
function, we’ll add the player as follows:
this.add(title);\\nthis.add(this.player);\\n\\n
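As an aside, this is where parameterizing the level would pay off instead of maintaining Level1 and Level2 as near-duplicates. One possible shape — the NumberedLevel class is illustrative, not part of the demo template:

// Hypothetical parameterized level, reusing the excalibur imports from before
export class NumberedLevel extends Scene {
  constructor(private labelText: string, private player: Player) {
    super();
  }
  onInitialize(engine: Engine): void {
    // Same label setup as before, but driven by the constructor argument
    this.add(new Label({
      text: this.labelText,
      pos: vec(300, 100),
      font: new Font({ family: "impact", size: 48, unit: FontUnit.Px }),
    }));
    this.add(this.player);
  }
}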
In main.ts
, we construct the player instance, pass it the level constructor, and register them in the engine as follows:
let player = new Player(vec(100,250));\\n...\\nscenes: {\\n level1: new Level1(player),\\n level2: new Level2(player),\\n}\\n...\\n\\n
Excalibur triggers an event when an object leaves the viewport, and we will use that in the Player
’s onInitialize
function to trigger the scene change:
...\\nthis.on(\\"exitviewport\\", () => {\\n let next = engine.currentSceneName == \\"level1\\" ? \\"level2\\" : \\"level1\\";\\n engine.goToScene(next, {\\n destinationIn: new FadeInOut({\\n duration: 2000,\\n direction: \\"in\\",\\n color: Color.Black,\\n }),\\n });\\n});\\n...\\n\\n
We check the current level name and select the other level as the next scene. We also provide a FadeIn
transition similar to the game.start
call in main.ts
so the change of scene has a smooth transition.
Finally, to get a wrap-around behavior where if the player exists from the left, it appears on the right in the next level, we add the following in both levels’ onActivate
function:
onActivate(context: SceneActivationContext<unknown>): void {\\n this.player.pos.x %= this.engine.screen.width\\n if (this.player.pos.x < 0) {\\n this.player.pos.x = this.engine.screen.width;\\n }\\n this.player.pos.y %= this.engine.screen.height;\\n if (this.player.pos.y < 0) {\\n this.player.pos.y = this.engine.screen.height;\\n }\\n}\\n\\n
onActivate
is called each time a scene is made active, so we’ll update the position correctly when we transition from one scene to another. If you run this, you will get a smooth transition when the player character exits the viewport:
In most games, you will need to detect some kind of collision in order to perform certain actions:
\\nIn Excalibur, when collision detection is enabled on an actor, it emits collision events and calls handlers such as the onCollisionStart method when it collides with something. For our example, we will add two squares, and each will take the player to another level when the player collides with it. We will create a Level3 like the above and add it to the engine:
...\\nscenes: {\\n level1: new Level1(player),\\n level2: new Level2(player),\\n level3: new Level3(player),\\n},\\n...\\n\\n
Create another class called LevelSelector
as follows:
export class LevelSelector extends Actor {\\n next: string;\\n label: string;\\n engine: Engine;\\n...\\n}\\n\\n
This class has three members: next
stores the level to go to, label
stores the display text, and engine
stores the reference to the engine.
Then we have the constructor:
\\nconstructor(levelName: string, label: string, pos: Vector, engine: Engine) {\\n super({\\n name: \\"LevelSelector\\",\\n pos: pos,\\n width: 100,\\n height: 100,\\n });\\n this.next = levelName;\\n this.label = label;\\n this.engine = engine;\\n}\\n\\n
In the onInitialize
method, we’ll create the square and label to display the following:
onInitialize(engine: Engine): void {\\n const square = new Rectangle({\\n width: 50,\\n height: 50,\\n color: Color.Magenta,\\n });\\n const title = new Text({\\n text: this.label,\\n font: new Font({\\n family: \\"impact\\",\\n size: 12,\\n unit: FontUnit.Px,\\n }),\\n });\\n let group = new GraphicsGroup({\\n members: [{\\n graphic: square,\\n offset: vec(0, 0),\\n },\\n {\\n graphic: title,\\n offset: vec(0, 60),\\n }]});\\n this.graphics.add(group);\\n}\\n\\n
Here, instead of using a single graphic like Rectangle
or Text
, we use GraphicsGroup
to display both of them. We also assign an offset to the text to make it appear below the square.
Finally, we add the onCollisionStart
method as follows:
onCollisionStart(self: Collider,other: Collider,\\n side: Side, contact: CollisionContact ): void {\\n if (other.owner instanceof Player) {\\n this.engine.goToScene(this.next);\\n }\\n }\\n\\n
We get self
as a reference to the object on which the method is called and other
as the other body that is colliding with it. These are both instances of the Collider
class and also store other information about the collision. The actual Actor
that collided is stored in the .owner
field. We check if the other.owner
is Player
, and change the level to the next one.
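For completeness, the selectors must be added to a scene like any other actor. Here’s a sketch of how Level1’s onInitialize might register them — the positions and labels are arbitrary:

onInitialize(engine: Engine): void {
  this.add(this.player);
  this.add(new LevelSelector("level2", "Go to Level 2", vec(150, 400), engine));
  this.add(new LevelSelector("level3", "Go to Level 3", vec(450, 400), engine));
}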
After making the appropriate changes in Level2
and Level3
to always start the Player
at a fixed position instead of wrap-around, we get our desired behavior:
If we touch the level 2 selector, we’ll change to level 2, and if we touch the level 3 selector, we change to level 3. Read more about collision detection and the types of collisions available in Excalibur here.
\\nSprites are images or animations used in games to represent characters, objects, and other elements. Most of the time, we don’t use simple squares or shapes for characters — we use more detailed images. A sprite sheet is a single, large image that contains multiple smaller sprites arranged in a grid. This makes it easier to manage sprites, as we can use one image instead of handling multiple separate files.
\\nWe will be using a pre-made sprite pack downloadable from itch.io. When using pre-made sprites, always ensure you have the appropriate license to use them for your purposes. Some assets are free to use for any game, some are free for personal use but not for commercial use, and some require purchasing a license for any use. Make sure you have the correct permission to use the assets.
For our example, you can download the asset pack here. The pack includes a lot of assets, but we'll only be using a few for this and the next section. After downloading the zip, extract the files from the Tiny Swords (Update 010) directory (in the zip) into our public/assets directory. Once that is done, you will have public/assets/Deco, public/assets/Factions, and others ready to use.
We’ll use the Factions/Knights/Troops/Warrior/Blue/Warrior_Blue.png image for our player. If you open it, you can see that the image contains individual frames of various animations, such as idle, walking, and attacking, laid out in a grid.
If we open one of the sprite sheets, we can see the available animations. Open Factions/Knights/Troops/Warrior/Blue/Warrior_Blue.png. Here, we have eight rows, each with six columns. The first row is the idle/standing frames; if we look carefully, the knight is slowly bobbing up and down. The next row is the walking animation.
The next two rows are two attacks facing left, then the next two rows are two attacks facing front, and finally, the last two are two attacks facing backward. For now, we will only use the standing and walking animations. You will also notice that there is a .aseprite file, which is a popular format for sprite sheets and animations. However, working with it requires another package, and viewing it requires separate software, so for now, we will simply use the .png file. You can read more about its use here.
First, open the resources.ts file. This file is used to load resources like images and sounds. In a larger game, you might split this up so that each level or stage only loads its necessary resources when it starts. But for our case, we'll load everything at the beginning. By default, the Sword resource is already loaded. Now, let's add the Knight resource:
export const Resources = {
  Sword: new ImageSource("./images/sword.png"),
  Knight: new ImageSource(
    "./assets/Factions/Knights/Troops/Warrior/Blue/Warrior_Blue.png"
  ),
} as const;
Now, in Player.ts, we'll load it as follows. Keep in mind that this isn't the best approach — normally, you'd define it in resources.ts and import it separately. But for the sake of this example, we'll do it this way.
First, add idleAnimation as a member of our Player class:
idleAnimation: Animation;
Then, in our constructor, we create a sprite sheet from this as follows:
const spriteSheet = SpriteSheet.fromImageSource({
  image: Resources.Knight,
  grid: {
    rows: 8,
    columns: 6,
    spriteWidth: 192,
    spriteHeight: 192,
  },
});
Here, we specify the grid rows and columns as we saw above. For the width and height of individual sprites, we divide the dimensions of the whole image by the number of columns and rows: since the sheet has six 192px-wide columns and eight 192px-tall rows, the image is 1152×1536 px, and 1152 / 6 = 192 and 1536 / 8 = 192.
Then, we create an animation from this and assign it to idleAnimation as follows:
this.idleAnimation = Animation.fromSpriteSheet(
  spriteSheet,
  range(0, 5),
  100,
  AnimationStrategy.Loop
);
Here, we pass in the sprite sheet we created earlier, then specify the range of individual sprites that make up the animation.
Sprites are numbered starting from 0, beginning at the top left and moving right, row by row. So, the first row contains sprites 0, 1, 2, 3, 4, and 5, and the second row continues with 6, 7, 8, 9, 10, 11, and so on. Since the idle animation sprites are in the same row, we can define the range directly. However, if they were arranged in columns or scattered across different positions, we could use fromSpriteSheetCoordinates instead of fromSpriteSheet and provide their exact locations as an array.
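For instance, a sketch of that scattered case, based on the options object Excalibur's docs describe (the coordinates here are made up for illustration):

// Hypothetical scattered frames: x and y are grid coordinates, not pixels
this.idleAnimation = Animation.fromSpriteSheetCoordinates({
  spriteSheet: spriteSheet,
  frameCoordinates: [
    { x: 0, y: 0, duration: 100 },
    { x: 3, y: 2, duration: 100 },
    { x: 1, y: 4, duration: 100 },
  ],
  strategy: AnimationStrategy.Loop,
});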
The third parameter is the duration of each frame in milliseconds (100ms, as specified on the asset page above the license information). Finally, the last parameter makes the animation loop continuously.
For the final step, we change our onInitialize method to use this instead of our simple square:
onInitialize(engine: Engine): void {
  this.graphics.use(this.idleAnimation);
  ...
Now, if you run the game, you will see that instead of our square, we have the knight image!
Yay! For an extra bit of fun, you can replace the levelSelector sprite with a tower instead of the square, so it looks as if the knight walks into the tower for the next level.
You can also pass a scale value in the super call within Player's constructor to adjust the sprite's size. Right now, the scale is (1,1), meaning the sprite appears at its original size. If you set it to a value less than 1, the image will shrink accordingly:
...
scale: vec(0.5, 0.5),
...
Because we still glide when moving, let's add a walking animation. We'll create more member variables to store the walking animations and separate the left- and right-facing animations. Since you now know how to create animations, the code below will be brief and show only the crucial steps.
\\nFirst, we’ll define the members we need:
idleAnimationRight: Animation;
idleAnimationLeft: Animation;
walkingAnimationLeft: Animation;
walkingAnimationRight: Animation;
facingRight: boolean;
Rename the current idleAnimation to idleAnimationRight. Then, in the constructor, create idleAnimationLeft by cloning idleAnimationRight and flipping it horizontally. We'll also set facingRight to true:
this.facingRight = true;
...
this.idleAnimationLeft = this.idleAnimationRight.clone();
this.idleAnimationLeft.flipHorizontal = true;
You can use the same technique to create walkingAnimationRight from the walking row of the sprite sheet, and then create walkingAnimationLeft by cloning it and flipping it horizontally.
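A sketch of those two steps, assuming the walking frames are the second row of the sheet (sprites 6 through 11, per the numbering described above):

// Walking frames: second row of the sheet, sprites 6 through 11
this.walkingAnimationRight = Animation.fromSpriteSheet(
  spriteSheet,
  range(6, 11),
  100,
  AnimationStrategy.Loop
);
this.walkingAnimationLeft = this.walkingAnimationRight.clone();
this.walkingAnimationLeft.flipHorizontal = true;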
Now, in the update method, we'll change the previous ifs to a chain of if-else-if and add an else at the end. This else handles the case when no key is pressed, where we use the corresponding idle animation based on the facingRight flag:
...} else {
  if (this.facingRight) {
    this.graphics.use(this.idleAnimationRight);
  } else {
    this.graphics.use(this.idleAnimationLeft);
  }
}
In the check for the right arrow key, we set facingRight to true and use the walkingAnimationRight animation:
if (engine.input.keyboard.isHeld(Keys.ArrowRight)) {
  this.facingRight = true;
  this.graphics.use(this.walkingAnimationRight);
  this.pos = this.pos.add(vec(2, 0));
} else...
I also changed the position increment from 3 to 2 to slow the movement speed. Similarly, in the left key check, we set facingRight to false and use the walkingAnimationLeft. Finally, for the up and down keys, we don't have separate animations, so we use the walking animations based on the facingRight flag.
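A sketch of those remaining branches (the exact up/down handling is an assumption based on the description above):

} else if (engine.input.keyboard.isHeld(Keys.ArrowLeft)) {
  this.facingRight = false;
  this.graphics.use(this.walkingAnimationLeft);
  this.pos = this.pos.add(vec(-2, 0));
} else if (engine.input.keyboard.isHeld(Keys.ArrowUp)) {
  // no dedicated vertical animation; reuse walking based on facing direction
  this.graphics.use(this.facingRight ? this.walkingAnimationRight : this.walkingAnimationLeft);
  this.pos = this.pos.add(vec(0, -2));
} else if (engine.input.keyboard.isHeld(Keys.ArrowDown)) {
  this.graphics.use(this.facingRight ? this.walkingAnimationRight : this.walkingAnimationLeft);
  this.pos = this.pos.add(vec(0, 2));
} else {
  // idle branch shown earlier
}

The result would look like this: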
As you can see, the knight correctly faces the direction of the key and uses the correct animation as well. Yay!
\\nIn this final part of the basic introduction, we’ll explore what a tilemap is and how to use it in our games. Tiles are small, repeatable images that can be combined to create a larger scene. A tileset is simply a collection of these individual tiles, usually arranged in a single image, much like a sprite sheet. A tilemap is a design made using these tiles, typically serving as the game’s background or level layout.
However, when using special formats like .tmx, we can attach properties to specific tiles or tile types. For example, we can mark border tiles as solid to prevent the player from walking through them, or define a specific tile as the player's starting position.
For this example, look at the image assets/Terrain/Ground/Tilemap_Flat.png. This consists of individual square tiles, which can be composed to create a larger level layout. We could load this up, split it into sprites, and design the level programmatically one tile at a time; however, that would be extremely tedious and quite slow. Instead, we will use a popular program called Tiled, which can be downloaded from here. We will then create a tilemap from the image we saw earlier and use it in our game.
Note that I will not be doing an in-depth explanation of Tiled itself; you can refer to the docs for that. We will only review the steps relevant to our case.
First, open Tiled and create a new map using File→New→New Map:
Here, change the width and height in the map size section to 15 tiles each, set the tile width and height to 64 px, and then click OK. The tile width and height can be found on the asset page above the license section:
This will create an empty project. In the bottom right section, click on New Tileset…. In the pop-up box, set the name as Flat Terrain, select the tilemap_flat image we saw above, and click on OK:
Now, if you adjust the size of the docks, you will be able to see the whole image, and the individual tiles will be selectable on hover and click:
\\nYou can select the specific tile you want and use it to draw directly on the grid:
To place tiles on top of one another, like the grass tiles, we need to create another layer in the top right panel, select it, and then add the grass tiles. You can also use the bucket fill tool in the top bar to fill in the middle section once you are done adding the borders.
\\nWe can also specify some other details using the object layer. For example, we can specify the starting position of the player or the position of the enemies, etc. For that, select the object tab next to the layers and click on the add object layer icon in the toolbar above the tabs.
\\nNow you will have an object layer listed in the layers tab. Select that layer, and in the top toolbar, select the Insert Rectangle
tool. Then, you can click and drag to create a rectangle object:
In the left-top sidebar panel, give it a name and type (in newer versions, this will be called class) like Player
:
Then, in the File menu, click on Save As… and select a location in our assets directory. Give it a name like level.tmx and save.
Now, in our project directory, run the following npm command to add the tiled plugin:
npm install --save-exact @excaliburjs/plugin-tiled
Then, in resources.ts, after the loader, we'll add the following:
export const TiledLevelMap = new TiledResource("./assets/level.tmx");
loader.addResource(TiledLevelMap);
We also create a bare-bones level called TiledLevel:
export class TiledLevel extends Scene {
  constructor() {
    super();
  }
}
And in main.ts, we'll add this level to scenes:
scenes: {
  level1: new Level1(player),
  level2: new Level2(player),
  level3: new Level3(player),
  tiledLevel: TiledLevel,
},
Then, change the game.start call as follows:
game
  .start("tiledLevel", { loader })
  .then(() => { TiledLevelMap.addToScene(game.currentScene); });
If you run the game now, you will see that our tilemap is being used. However, there is no player or anything else. For that, we can use entity factories.
\\nIn the tiled resource creation, we pass an options object as follows:
export const TiledLevelMap = new TiledResource("./assets/level.tmx", {
  entityClassNameFactories: {
    Player: (props: FactoryProps) => {
      return new Player(vec(props.worldPos.x, props.worldPos.y));
    },
  },
});
We can also add custom properties to the Tiled object, and we will get them via props.object.properties. Here, we can set values for resources such as coins in a treasure box, the type of enemy, and so on.
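For instance, a hypothetical factory reading a custom coins property might look like the sketch below. The Treasure object class, the coins property, and the TreasureBox actor are all illustrative, and I'm assuming the plugin exposes object properties as a map:

// Hypothetical "Treasure" object class with a custom "coins" property set in Tiled
Treasure: (props: FactoryProps) => {
  const coins = Number(props.object?.properties?.get("coins") ?? 0);
  return new TreasureBox(vec(props.worldPos.x, props.worldPos.y), coins);
},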
Now, if we run the game, we will see this:
However, as you can see at the end, our player can move beyond the boundaries as well as under the grass. To fix that, let's edit the tilemap, move the border tiles of the map to a separate layer called boundaries, and add a custom bool property solid set to true:
We will also update the Player constructor's super call and pass in z as 10 to make the player appear on top of everything. We also need to set the CollisionType of the player to Active so it can collide with other objects and be stopped by solid objects:
super({
  ...
  z: 10,
  collisionType: CollisionType.Active,
  scale: vec(0.5, 0.5),
});
Now, if you run this, you will see that the player is stopped by the boundary tiles instead of walking over them, and the player sprite is drawn over the grass instead of below it. Note that because of our tile size, we have a full square, which acts as a boundary instead of a thin strip at the end.
\\nWith this, we have covered all the concepts needed to make our first game in Excalibur. Now, let’s actually make our game!
\\nAs I mentioned at the start, one issue that I faced when learning game development was getting stuck after finishing the tutorial. I knew the basic concepts but wasn’t sure how to use them to make the game I wanted to make. So, to make it easier, I created a set of questions that loosely followed the engine loop and used them to decide what I needed to do next.
\\nThis is not all-encompassing, and you will still need to learn a lot and make more projects on your own to get to know the engine better. But with the basic concepts and the following steps, you’ll be able to graduate from simply copying a tutorial to making a project by yourself.
\\nWhen building a web app, we break it down into smaller, manageable parts — individual screens — and develop them one at a time. Similarly, we’ll break the game into smaller pieces and ask key questions for each part. While we’ll still need to iterate over everything to refine and create a cohesive gameplay experience, this approach helps make things more manageable (especially for your first project, where everything might feel overwhelming).
\\nFor each “part” of the game, ask yourself the following:
\\nI’ll refer to these as the “implementation questions” in the rest of the article, as these will help us think about how to implement a part of the game.
\\nIn the rest of this section, I will implement a very basic game using the concepts we learned above. While doing so, I will demonstrate how I think using our list of implementation questions. You can find the source code for each step in this repository.
\\nRemember, this isn’t the best way, but it’s a good enough way to get started and break free from tutorial hell. Instead of just following this example, come up with your own concept and apply the same thought process to build your game. Like beginner web projects, cloning a simple existing game can be a great way to start.
\\nFor my example, I’m creating a simple top-down fighting game where you control a character, battle enemies, and advance to the next level after defeating them all. Early on, I’ll focus more on level design and gameplay rather than UI elements, using simple placeholders for now and refining them later. Feel free to take a different approach!
I’ll first delete all the existing .ts files except main.ts, player.ts, resources.ts, and tiledLevel.ts. Because we were only using these four files at the end of the last section, there shouldn’t be any change in the game.
Let’s follow the questions and think about what should happen after the player clicks on the Play Game button:
I created a new scene called LevelSelector and an actor called LevelIcon. The LevelIcon takes in a callback and its position and simply shows a square with the given label. When clicked, it calls the callback. In the level selector scene, I created two of these and added them to the scene. For the callback, it simply logs the level name for now, but after creating the levels, I will use engine.goToScene to change the level:
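A minimal sketch of that LevelIcon actor (my real class differs a bit, and the label graphic is omitted here for brevity):

export class LevelIcon extends Actor {
  constructor(private label: string, pos: Vector, private onSelect: () => void) {
    super({ name: "LevelIcon", pos, width: 100, height: 100, color: Color.Magenta });
    // label rendering (a Text graphic, as in LevelSelector earlier) is omitted
  }

  onInitialize(engine: Engine): void {
    // fire the callback when the square is clicked or tapped
    this.on("pointerup", () => this.onSelect());
  }
}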
Now, I want to design the first level, so let’s start again with our implementation questions:
For this, I’ll design a larger tilemap. I want the camera to focus on the player while staying within a bounding box to prevent showing empty space. To achieve this, I’ll add a camera strategy in the player’s onInitialize method (more details can be found here). I’ll also reset the player’s scale to (1,1) and increase their speed.
Next, we will add enemy characters to the level. Here are our implementation questions for this step in our game development process:
\\nNot every one of the implementation questions is applicable here. For this, the only change will be adding enemy characters, and their behavior will be set in the next step.
So, we will add another class for the enemy and load the sprite sheet accordingly. You can find these steps in the source code, as you can for each step in this process. After adding everything, we will see goblins in the scene:
I’ll update the method in the goblin class to check if the player is within a certain distance. If the player is close enough, the goblin will start moving toward them and stop at a small distance. If the player moves too far away while being chased, the goblin will stop and stand still.
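A sketch of that chase logic; the distance thresholds, speed, and member names (player, walkingAnimation, idleAnimation) are assumptions for illustration:

onPreUpdate(engine: Engine, delta: number): void {
  const distance = this.pos.distance(this.player.pos);
  if (distance < 300 && distance > 40) {
    // close enough to notice, not yet close enough to stop: chase the player
    this.vel = this.player.pos.sub(this.pos).normalize().scale(50);
    this.graphics.use(this.walkingAnimation);
  } else {
    // too far away, or already next to the player: stand still
    this.vel = Vector.Zero;
    this.graphics.use(this.idleAnimation);
  }
}

The result will look like this: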
\\n\\nAs you can see, the goblins follow the player and can walk over the hedges, while the player cannot. Next, we will add an attack for the player.
For this, I had to change the update logic as well as add another class member called attacking in the Player class. I also shared Excalibur’s EventEmitter between the player and all goblins so that the player can send an attack event that goblins can react to. There are other — and possibly better — ways to do this, but this will do for now.
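A rough sketch of that event wiring; the event name, payload shape, and the goblin's takeHit handler are my own conventions, not anything from Excalibur:

// A shared emitter, e.g. exported from its own module
export const combatEvents = new EventEmitter();

// In Player, when the attack animation triggers:
combatEvents.emit("attack", { pos: this.pos.clone() });

// In Goblin.onInitialize:
combatEvents.on("attack", (evt: { pos: Vector }) => {
  if (this.pos.distance(evt.pos) < 100) {
    this.takeHit(); // hypothetical damage handler
  }
});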
After this, our game will look like this:
\\n\\nAs you can see, after three attacks, the goblins are gone. Now, on to goblin attacks!
Now, the goblins attack the player, but their attacks are slower than the player’s and have a short cooldown, preventing continuous damage. I still need to add an animation to indicate when the player is hit.
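A sketch of how such a cooldown can be tracked with the delta time the engine passes each frame (the threshold and helper names are hypothetical):

private msSinceAttack = 0;

onPreUpdate(engine: Engine, delta: number): void {
  this.msSinceAttack += delta;
  // only attack if the player is in range and the cooldown has elapsed
  if (this.isInAttackRange() && this.msSinceAttack > 1500) {
    this.msSinceAttack = 0;
    this.attackPlayer(); // hypothetical
  }
}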
\\nAnd that’s it for this post! There’s still plenty to improve — bugs to fix, UI and sound to add, etc. — but now you have a framework for thinking through your game development process.
\\nAs I mentioned earlier, this isn’t the ultimate guide to making a game. As your project grows more complex, these five implementation questions won’t always be enough, and you’ll need to make more thoughtful design choices. But this approach is a solid starting point to help you break free from tutorial hell and start building your first game.
\\nIn this post, we started with the basic concepts of Excalibur.js. After covering them, we saw how we can think in terms of five questions to implement your first game piece by piece. With this, you can begin your game development journey! Be sure to share it with me in the comments if you upload it somewhere.
\\nThe code for the demo is available in my GitHub repository here.
Angular and React are two of the most popular JavaScript frameworks in modern web development.
\\nAngular is a full-fledged framework based on TypeScript. It boasts a wide range of built-in features offering a structured approach to building web applications.
\\nReact, on the other hand, is a lightweight framework for building user interfaces.
\\nAngular is great for large, scalable, and maintainable projects that follow strict rules, while React offers more flexibility and is perfect for agile, fast-paced environments. Angular has a wider scope but is rigid, and React offers more freedom and creativity.
\\nFeature | \\nAngular | \\nReact | \\n
---|---|---|
Type | \\nTypeScript | \\nJavaScript (or JSX) | \\n
Data binding | \\nBidirectional | \\nUnidirectional | \\n
Routing | \\nBuilt-in Angular Router | \\nRequires React Router | \\n
State management | \\nBuilt-in with RxJS and Signals | \\nRequires third-party solutions, like Redux | \\n
Performance | \\nSlower because it requires compilation | \\nFaster because it uses the virtual DOM | \\n
Learning curve | \\nAngular has a steep learning curve | \\nReact is generally easier to learn | \\n
Angular is typically best for:
\\nReact tends to work best for:
jQuery was the first true heavyweight JavaScript library. It was released in 2006, and its main purpose was to address issues with browser compatibility, DOM manipulation, and AJAX support.
\\nThe 2010s saw the emergence of frontend JavaScript frameworks and libraries like AngularJS, React, Vue, Backbone.js, Ember.js, Svelte, Angular, and more. Fast-forward to the mid-2020s, and there are now over 80 JavaScript frameworks/libraries. Depending on who you ask, that number could be even higher.
\\nIf modern web development had a heavyweight division, Angular and React would be among the leading contenders. This post will explore the key differences between Angular and React, their strengths, and use cases to help developers decide which option to choose.
\\nIn the red corner, weighing in with a plethora of built-in features, TypeScript mastery, full-fledged enterprise-level monster, boasting a powerful CLI, and backed by tech giants Google…Aaaaangularrr!
\\nAnd now, in the blue corner, forged in the depths of Facebook, reborn as Meta, weighing in as a lightning-fast UI library, repping JavaScript, champion of flexibility and high performance, lord of the virtual DOM…Reeeact!
\\nLet’s get ready to…render?
\\nAngular is a full-fledged framework for building web applications. It’s TypeScript-based and was released by Google in 2016 as a complete rewrite of AngularJS to improve performance and scalability.
\\n\\nAngular can be described as having “batteries included” because of its many built-in features for routing, state management, form handling, dependency injection, and more.
\\nThe current version is Angular 19, and it introduced some changes, like standalone components now being true
by default, stabilized signal-related APIs, and support for incremental hydration in developer preview. You can learn more details in the Angular 19 release blog post.
Angular provides a solid structure for building reliable and scalable web applications:
- Deferrable views: Parts of a template can be annotated with the @defer feature to be hydrated or lazy-loaded later. The client downloads less code, improving page load speeds
- ng update: The CLI command that updates an application and its dependencies to newer Angular versions
Angular enforces a specific way of structuring web applications. It plays by the rules and likes to follow best practices. It can be considered a strength or limitation depending on how you look at it. Angular enforces a well-defined architecture where you build applications using components and modules. It makes large applications easy to maintain but limits flexibility.
\\nReact is a JavaScript library for building UI components. It was developed and released by Facebook (Meta) in 2013. It’s a minimalist, lightweight but solid option for building web applications.
\\nReact is quick to set up but relies on a few third-party solutions to provide features like routing, authentication, and translation. It follows a component-based approach, empowering you to create reusable UI components.
\\nReact has several features that make it a reliable option for building web applications.
\\n\\nAngular and React share a few commonalities and concepts, like using a component-based system. However, there are key differences that set them apart.
\\nThe first major difference between Angular and React is their type. Angular is TypeScript-based, while React is JavaScript-based. However, TypeScript is well-supported on React.
\\nTypeScript is a superset of JavaScript, so it’s technically possible for you to build Angular applications with JavaScript, but it could lead to issues.
Angular uses bidirectional or two-way binding, while React is unidirectional. When the UI changes in an Angular application, the corresponding model state also changes. In React, data flows in one direction, from parent to child components.
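To make that concrete, here's roughly the same input field in both models (an illustrative sketch, not taken from either project's docs):

// Angular template: two-way binding keeps the input and the `name` field in sync
// <input [(ngModel)]="name" />

// React: one-way flow; state flows down via `value`, changes flow up via the event
function NameInput() {
  const [name, setName] = React.useState("");
  return <input value={name} onChange={(e) => setName(e.target.value)} />;
}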
Angular has a first-party solution for routing (@angular/router), while React requires React Router, a third-party solution.
Angular also has built-in state management solutions, Signals and RxJS, while React requires third-party solutions like Redux.
\\nTypeScript needs to be compiled into JavaScript before it gets to your browser. This doesn’t necessarily mean all Angular applications are slow, but React is faster due to its use of the Virtual DOM.
\\nAngular has a steeper learning curve compared to other JavaScript frameworks. It has many built-in features and complex concepts you’d need to learn. Angular also requires a solid understanding of TypeScript, which means you need a good understanding of JavaScript as well.
\\nYou only need an intermediate level of understanding of HTML, CSS, and JavaScript to use React. React’s JSX syntax might take some getting used to, but it’s generally considered easier to learn than Angular.
\\nAfter 12 rounds, the contenders are still standing, and it’s down to the judges to decide. But how do we decide?
If this were the court of public opinion, React would take the crown. It tops virtually every popularity and desirability poll or survey, not to mention its 234k GitHub stars to Angular’s 97.3k. However, it’s best to look at them objectively.
\\nBoth Angular and React offer solid options for building modern web applications. It’s not black and white; the choice comes down to your project’s needs and requirements.
Angular is the wise, old head, providing stability and security. If these are your preferences, then Angular is a good fit. Angular is best suited for enterprise applications that require structure and a scalable architecture. It’s also well-suited for large teams because its enforced structure keeps large codebases consistent as many people work on them. Angular can be used to build complex applications that require long-term support and maintainability.
\\nReact is a great choice in dynamic, fast-moving environments. It’s suited for applications that require quick development and fast updates. React is perfect for small teams and startups that prioritize agility and flexibility.
\\nYou can build a small application with Angular, but that might be overkill. You can also build an enterprise-level application with React, but it might not keep its shape if you try to expand.
\\nThere’s no clear winner, and we can’t call it a draw either. Consider what your project needs, and you’ll know which one to choose.
\\n Nobody likes to see errors, but they’re inherent in all software programs. Issues may pop up, whether you’re running simple scripts or managing large-scale apps.
Docker uses exit codes to propagate errors in processes. exit code 1 is one of the most common and frustrating errors for developers. Let’s explore what it means and how to fix it.
What is exit code 1 in Docker?
exit code 1 is the error code signaling that a container’s process has been terminated due to a general or unspecified error.
When you execute processes in your Docker containers, you may have noticed an exit code 0, which means your process was successful. In other cases where your container terminates, you’ll get an exit code 1.
When your container processes terminate with an exit code, the code indicates the process’s status (failed or completed).
exit code 1 usually signals a general error or failure. If you’re using Docker, either a process has failed or there’s a general error. You must pinpoint which it is and solve the underlying issue.
Some of the common reasons you’ll get an exit code 1 error are:
- An error in your application code that causes the process to return exit code 1 from the container
- Missing files, dependencies, or environment variables the process needs, which also ends in exit code 1
- An invalid ENTRYPOINT or CMD — the entry point or command is invalid in the container specifications file
with this scenario so you can understand how it happens.
First, create a bash script and name it failing.sh
, then add these commands:
#!/bin/bash
echo "This script will fail with Exit Code 1."
exit 1
Now, execute this command to make the script executable:
chmod +x failing.sh
Next, create a Dockerfile in the same directory and enter these directives.
# Use a lightweight base image
FROM alpine:latest

# Copy the failing script into the container
COPY failing.sh /failing.sh

# Set the script as the default command
CMD ["/failing.sh"]
The Dockerfile copies the script into the container and runs it as the default command.
\\nFinally, run the build
command to build the container:
docker build -t exit-code-1-demo .\\n\\n
If you’ve done everything right, you should get a resounding error with exit code 1
:
The error arises because the failing.sh script explicitly exits with exit code 1, signaling a failure. When the container starts, it executes the script as the default command and terminates with a non-zero exit code.
Since Docker treats non-zero exit codes as failures, it reports the failure accordingly.
\\nIn this case, we’ve simulated a process failure, but in development, yours will be real, and you’ll need to troubleshoot.
Troubleshooting exit code 1
Once you’ve received an error with exit code 1, you’ll want to find the root cause to get your app running again. Here are some practices you can follow to troubleshoot.
You should inspect the container’s state with the ps command to gather more information about its lifecycle, especially why it might have terminated:
docker ps -a
The command lists all containers, including the exited ones. To troubleshoot, look for the specific container’s status and exit code.
\\nWhen you receive an error, you should first check your logs. Sometimes, you’ll find intuitive logs about the error that could help you resolve it:
docker logs <container_id>
The command should display the output and error messages from the container. Then, you can look for error messages, stack traces, missing files or dependencies, and permission errors.
\\nENTRYPOINT
and CMD
Another way to get an exit code 1
is to miss the correct entry point or command. You might have specified incorrect paths to scripts or executables, or had missing or invalid arguments.
Check the CMD or ENTRYPOINT directives in your Dockerfile and make sure they’re valid:
CMD [\\"/failing.sh\\"] \\n\\n
In this case, the CMD directive is the same as in the example we used to reproduce the exit code 1 error. The directive itself is valid, even though the script it runs fails.
You need to make sure you’re installing all the dependencies your project requires to run.
You can use the exec command to inspect the container’s file system and browse through it to check for missing files and dependencies:

docker exec -it <container_id> /bin/sh
Also, make sure you’re passing the right environment variables. You can set them with the ENV directive, copy in a configuration file with the COPY directive, or specify the environment variables in your Docker Compose file.
How to fix exit code 1
Let’s go over some fixes for exit code 1 to get you deploying your apps on the fly:
Application errors are one of the common causes of exit code 1. You can easily fix them by debugging your apps and handling exceptions.
Log at different levels during development to capture detailed messages and handle exceptions to prevent crashes.
\\nMake sure you test locally before containerizing to pinpoint where the problem originates. If you don’t have any errors testing locally, then it’s probably from the container. You’ll want to spin up a container with configurations identical to your local environment.
\\nYour deployment will likely fail if you misplace the environment variables it needs to run.
\\nYou can check your Docker container’s environment variables with the inspect
command:
docker inspect <container_id> --format '{{ .Config.Env }}'
Make sure you’re specifying the environment variables in your Dockerfile with the ENV directive:
ENV MY_VAR=my_value
Also, consider providing default values for environment variables in your app’s implementation, and handle missing values gracefully.
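For example, a Node.js app might fall back to a sensible default when a variable is missing (a minimal sketch; the variable name matches the ENV example above):

// Use the configured value if present, otherwise a safe default
const myVar = process.env.MY_VAR ?? "my_value";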
Resource constraints as your project grows may result in an error with exit code 1.
You can use docker stats to monitor the resource usage of your container like this:
docker stats <container_id>
You can go further to adjust memory and resource limits with:
docker run --memory="512m" --cpus="1" my-image
In this case, the command runs a Docker container from my-image, limiting memory usage to 512MB and CPU usage to one core.
If the issue is a failed dependency, you can always interact with the container to find and fix the issues. The Docker exec command is great in this regard:
docker exec -it <container_id> /bin/sh
ls -l /path/to/dependency
You can rebuild with the build command once you’ve found and fixed the issues.
Helpfully, Docker provides restart policies for you to specify what happens when your container’s processes fail.
\\nYou can configure restart policies so your container recovers from failures in your Docker Compose YAML file:
\\nPolicy | \\nExecution | \\n
no | \\nThe container won’t restart (default) | \\n
always | \\nThe container always restarts regardless of the exit code | \\n
on-failure | \\nThe container restarts on non-zero exit codes like exit code 1 | \\n
unless-stopped | \\nThe container would always restart except it’s explicitly stopped | \\n
Here’s how you can specify the restart policy in your docker-compose.yml file:
version: '3.8'
services:
  my-service:
    image: my-image

    # use any one of these
    restart: "no"
    restart: "always"
    restart: "on-failure"
    restart: "on-failure:5" # Retry up to 5 times
    restart: "unless-stopped"
You can do this for any of the services you’re orchestrating via Docker Compose, and it should work as expected.
\\nIn this article, you’ve learned what causes errors with exit code 1
and how you can mitigate them in your development and deployment processes.
In production and staging, use these restart policies to make sure your app is always available for your users.
3D web development is taking the internet by storm, with both developers and non-developers creating 3D web experiences using AI coding tools.
\\nThis is fascinating, considering 3D development used to be an area only a select few developers could work in. But now, even non-developers are “vibe coding” entire 3D web games, AR/VR applications, and virtual e-commerce experiences with just a few prompts.
\\nIn this article, we’ll cover what’s happening, where 3D web development started, how it has changed over the years, and how AI is revolutionizing the space even further. We’ll also explore some of the AI tools you can use to build 3D web experiences, the current and future challenges AI presents in 3D web development, and how things might change in the future.
\\nThe web was originally designed for text and static images. It was built to focus on sharing documents rather than creating interactive experiences. Any interactivity beyond images in the early web required plugins like Flash or Java applets. Even then, those tools didn’t handle 3D particularly well.
\\n3D development on the web began to gain more ground around the late 1990s to early 2000s, especially with the introduction of tools like VRML (Virtual Reality Modeling Language), Shockwave 3D, and Java OpenGL.
\\nMost of these tools had a gentle learning curve, code-wise. However, they had performance issues, as they relied entirely on the CPU. Plus, you would need to set up additional plugins to get them to run on the web.
\\nFor context, here’s what a sample code for rendering a blue 3D box looks like in Shockwave 3D:
\\non create3DBox()\\n -- Create a new 3D world\\n w = member(\\"3Dworld\\").newWorld()\\n \\n -- Create a cube model\\n m = w.newModel(\\"cube\\", #box)\\n \\n -- Set the size of the cube\\n m.transform.scale = vector(100, 100, 100)\\n \\n -- Set the color to blue\\n m.shaderList[1].diffuse = rgb(0, 0, 255)\\n \\n -- Add a light source\\n light = w.newLight(\\"light\\", #directional)\\n light.color = rgb(255, 255, 255)\\n light.transform.rotation = vector(45, 45, 0)\\n\\n -- Position the camera\\n cam = w.newCamera(\\"camera\\")\\n cam.transform.position = vector(0, 0, -300)\\n\\n -- Render the scene\\n w.render()\\nend\\n
But before you could get the code to run on the web, you’d have to build the scene inside Adobe Director, export it as a .dcr file, embed it into your webpage, and make sure the user had the Shockwave Player plugin installed. And even when all that was in place, performance was disappointing.
In 2011, the Khronos Group introduced WebGL (Web Graphics Library), and it revolutionized 3D on the web. WebGL was different in that it allowed browsers to access the computer’s GPU directly and leverage hardware acceleration; this made it possible to render performant 3D in real time without any additional plugins.
\\nThis shift also introduced libraries like Three.js and Babylon.js, both of which further simplified WebGL with easy-to-use APIs and SDKs. Their major advantage is that they are plug-and-play. All you need to do is add their JavaScript library to your HTML file, and things just work.
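For contrast with the Shockwave snippet above, here's roughly the same blue box in Three.js (a minimal sketch, assuming a module build of three):

import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// A blue cube lit by a directional light, mirroring the Lingo example
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x0000ff })
);
scene.add(cube);

const light = new THREE.DirectionalLight(0xffffff, 2);
light.position.set(1, 1, 1);
scene.add(light);

renderer.render(scene, camera);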
\\nOver time, 3D on the web became more developer-friendly, and adoption grew. But what continued to limit developers from creating immersive 3D experiences was limited knowledge of 3D asset creation, shaders, scene setup, and more.
\\nGenerative AI and LLMs are now starting to change that.
\\nAI is lowering the technical barriers in 3D web development. Instead of spending hours creating models, textures, and animations, you can now leverage AI tools to handle the heavy lifting. For example, with Windsurf/Cursor, you only need to describe the experience you want to build, and it builds it for you.
\\nTo try it out, I wanted to recreate the famous Chrome Dino game in 3D. All I had to do was describe it in detail and feed the prompt into Windsurf, as shown in the image below:
Next, it created the needed files and pasted all the necessary code into their respective locations:
After following all the instructions and running the app, it just worked!:
But that’s not all. We now have tools that can generate 3D assets from static images or text prompts, similar to how generative image models like DALL-E, Google’s Imagen, and Midjourney work.
\\n\\nTo improve the 3D dino game, I first asked ChatGPT to generate a low-poly T-Rex image for me:
Next, I removed the image background and sent it to Hyper3D Rodin to convert it into a 3D model:
After the conversion, I downloaded the model’s .glb file, copied it into my project directory, and prompted Windsurf to replace the default box character with the new 3D model, as shown below:
And again, it just works. It’s like magic:
You can also play the game here or explore the code via this GitHub repo.
\\nWhile the recent trend has mostly focused on creating 3D web games, it’s important to remember that 3D web development goes beyond that. There are plenty of other use cases, such as:
\\nThese are all important application areas solving real problems, and it’ll be exciting to see more people exploring them instead of just sticking to gaming.
\\nAll these exciting shifts in development experience come with their challenges. Let’s explore some of them below.
\\nYes, non-developers can open Cursor or Windsurf and start vibe coding what they want to build. The reality, though, is that the AI and LLMs powering these tools often hallucinate or generate code that doesn’t work. And for someone who doesn’t understand programming basics, figuring out what went wrong and how to fix it can be a dead-end experience.
\\nLLMs are good at writing clean code, but they’re just as good at writing bad code, too. In their eagerness to help you get things working, they might trade off performance or security for quick fixes. Over time, this can lead to bloated or low-quality codebases.
\\nFor example, a user shared their experience on X (Twitter) about how their platform, built with AI coding tools, was compromised. Fixing the issue wasn’t straightforward, as shown in the image below:
This event buttresses the earlier points about the knowledge gap and quality control. AI tools can speed things up, but they don’t replace the need for a solid understanding of how things work under the hood, at least not yet.
\\nThere’s also the ethical concern around ownership. The code and 3D assets these AI tools generate in seconds are trained on the hard work of other developers and artists, many of whom never gave explicit permission for their work to be used. This raises questions about who really owns the output, whether it’s truly original, and what rights the original creators should have.
\\nThe AI tools for 3D web development fall into two categories:
\\nLLMs and coding assistants for building the experience, and generative models for converting text or images into 3D assets.
\\nFor development, we have tools like Claude, Windsurf, and Cursor:
\\nClaude is better fine-tuned for coding tasks compared to other general-purpose LLMs like OpenAI’s ChatGPT and Google’s Gemini, and it’s especially good at writing JavaScript—Three.js and Babylon.js included.
\\nWindsurf, as demonstrated in our example, is an AI coding tool that lets you build in a VS Code-like environment with support for models like Claude, GPT-4, and others.
\\nCursor provides a similar AI-first coding experience, with strong autocomplete, inline explanations, and multi-model support.
\\nFor generating 3D assets, we have tools like Hyper3D Rodin, Tripo3D, Meshy.ai, and Hunyuan3D-2. Hyper3D Rodin lets you transform text or 2D images into 3D models, as shown in our example. Tripo3D and Meshy.ai work similarly. Hunyuan3D-2, on the other hand, is an open-source model developed by Tencent that focuses on generating high-fidelity 3D assets from text with detailed geometry and textures.
\\n3D on the web will only continue to get better. To back this up, there has been progress on introducing WebGPU – a newer, more performant, and lower-level web graphics API that aims to replace WebGL. In addition, there’s a new WebXR API that brings native 3D and AR/VR support to the browser without requiring any additional libraries. Chromium-based browsers like Chrome and Edge already support it, and over time, other browsers will likely follow.
\\nSimilarly, this is the worst these AI tools and models will ever be. While they may not be perfectly accurate right now, they’re being improved constantly and will only get better. In the future, AI will make 3D web development more automated and accessible with realistic and interactive experiences.
\\nIn this tutorial, we covered the evolution of 3D web development and how AI is making it easier to create 3D experiences with a practical example. We also explored some of the AI tools you can use to build 3D applications, as well as the current and future challenges surrounding AI 3D web development.
\\nIt’s genuinely exciting to think about how much easier AI makes creating immersive 3D experiences in the future!
\\n When it comes to making HTTP requests, developers often ask a common question: which is better, Axios or the Fetch API?
Axios and the Fetch API help developers handle HTTP requests (GET, POST, etc.) in JavaScript. Understanding these technologies’ strengths, differences, and use cases is crucial for modern web development.
Here are some differences worth noting between the two solutions:
\\nCharacteristic | \\nFetch API | \\nAxios library | \\n
---|---|---|
Origin | \\nNative JavaScript API | \\nThird-party library | \\n
Installation | \\nNatively available to browsers and Node.js v18+ | \\nRequires npm install | \\n
JSON parsing (see code below) | \\nManual (need to use .json()) | \\nAutomatic | \\n
Error handling | \\nMinimal (only network errors) | \\nComprehensive | \\n
Request interceptors | \\nNot available (see this article to implement them in fetch() ) | \\nAvailable | \\n
Request cancellation (see code below) | \\nRequires AbortController | \\nBuilt-in method | \\n
Response transformation | \\nManual | \\nAutomatic | \\n
Platform support | \\nOnce browser-only but now available in Node.js v18+ | \\nBrowser and Node.js | \\n
Some developers prefer Axios over built-in APIs for their ease of use. But many overestimate the need for such a library. The Fetch API is perfectly capable of reproducing the key features of Axios, and it has the added advantage of being readily available in all modern browsers.
In this article, we’ll compare fetch() and Axios to see how they can be used to perform different tasks. By the end of the article, you should have a better understanding of both APIs.
Editor’s note: This article was updated by Elijah Agbonze in April 2025 to add a decision framework for developers evaluating Axios and the Fetch API, and cover more advanced technical scenarios such as handling timeouts, cancellation requests, and streaming.
\\nfetch()
Before we delve into more advanced features of Axios, let’s compare its basic syntax to fetch()
.
Here’s how you can use Axios to send a POST request with custom headers to a URL. Axios automatically converts the data to JSON, so you don’t have to:
// axios

const url = 'https://jsonplaceholder.typicode.com/posts';
const data = {
  a: 10,
  b: 20,
};
axios
  .post(url, data, {
    headers: {
      Accept: "application/json",
      "Content-Type": "application/json;charset=UTF-8",
    },
  })
  .then(({ data }) => {
    console.log(data);
  });
Now compare this code to the fetch() version, which produces the same result:
// fetch()

const url = "https://jsonplaceholder.typicode.com/posts";
const options = {
  method: "POST",
  headers: {
    Accept: "application/json",
    "Content-Type": "application/json;charset=UTF-8",
  },
  body: JSON.stringify({
    a: 10,
    b: 20,
  }),
};
fetch(url, options)
  .then((response) => response.json())
  .then((data) => {
    console.log(data);
  });
Notice that:
- fetch() uses the body property for a post request to send data to the endpoint, while Axios uses the data property
- The data passed to fetch() is transformed into a string using the JSON.stringify method
- With fetch(), you have to call the response.json method to parse the data to a JavaScript object
- In the fetch() method, the final data can be named any variable
- Axios and fetch() handle headers in the same way
handle headers in the same wayOne of the main selling points of Axios is its wide browser support. Even old browsers like IE11 can run Axios without any issues. This is because it uses XMLHttpRequest
under the hood. The Fetch API, on the other hand, only supports Chrome 42+, Firefox 39+, Edge 14+, and Safari 10.1+ (you can see the full compatibility table on CanIUse.com).
If your only reason for using Axios is backward compatibility, you don’t need an HTTP library. Instead, you can use fetch() with a polyfill to implement similar functionality on web browsers that don’t support fetch().
To use the fetch() polyfill, install it via the npm command like so:
npm install whatwg-fetch --save
Then, you can make requests like this:
import 'whatwg-fetch'
window.fetch(...)
Keep in mind that you might also need a promise polyfill in some old browsers.
As we saw earlier, Axios automatically stringifies the data when sending requests (though you can override the default behavior and define a different transformation mechanism). When using fetch(), however, you’d have to do it manually. Compare the two below:
\\n// axios\\naxios.get(\'https://api.github.com/orgs/axios\')\\n .then(response => {\\n console.log(response.data);\\n }, error => {\\n console.log(error);\\n });\\n\\n// fetch()\\nfetch(\'https://api.github.com/orgs/axios\')\\n .then(response => response.json()) // one extra step\\n .then(data => {\\n console.log(data) \\n })\\n .catch(error => console.error(error));\\n\\n
Automatic data transformation is a nice feature, but again, it’s not something you can’t do with fetch().
One of Axios’s key features is its ability to intercept HTTP requests. HTTP interceptors come in handy when you need to examine or change HTTP requests from your application to the server or vice versa (e.g., logging, authentication, or retrying a failed HTTP request).
\\nWith interceptors, you won’t have to write separate code for each HTTP request. HTTP interceptors are helpful when you want to set a global strategy for how you handle requests and responses.
\\nHere’s how you can declare a request interceptor in Axios:
\\naxios.interceptors.request.use((config) => {\\n // log a message before any HTTP request is sent\\n console.log(\\"Request was sent\\");\\n return config;\\n});\\n\\n// sent a GET request\\naxios.get(\\"https://api.github.com/users/sideshowbarker\\").then((response) => {\\n console.log(response.data);\\n});\\n\\n
In this code, the axios.interceptors.request.use() method defines code to be run before an HTTP request is sent. Similarly, axios.interceptors.response.use() can be used to intercept the response from the server. Say there is a network error; using a response interceptor, you can retry that same request, as in the sketch below.
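For example, a response interceptor that retries a failed request once might look like this (the _retried flag is our own convention, not an Axios API):

axios.interceptors.response.use(
  (response) => response,
  (error) => {
    const config = error.config;
    // retry pure network errors (no response received) a single time
    if (!error.response && config && !config._retried) {
      config._retried = true;
      return axios(config);
    }
    return Promise.reject(error);
  }
);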
By default, fetch() doesn’t provide a way to intercept requests, but it’s not hard to come up with a workaround. You can overwrite the global fetch() method and define your interceptor, like this:
fetch = (originalFetch => {
  return (...args) => {
    // log before delegating to the original fetch
    console.log('Request was sent');
    return originalFetch.apply(this, args);
  };
})(fetch);

fetch('https://api.github.com/orgs/axios')
  .then(response => response.json())
  .then(data => {
    console.log(data);
  });
Progress indicators are very useful when loading large assets, especially for users with slow internet. Previously, JavaScript programmers used the XMLHttpRequest.onprogress callback handler to implement progress indicators.
Implementing a progress indicator in Axios is simple, especially if you use the Axios Progress Bar module. First, you need to include the following style and scripts:
\\n<!-- the head of your HTML --\x3e\\n<link\\n rel=\\"stylesheet\\"\\n type=\\"text/css\\"\\n href=\\"https://cdn.rawgit.com/rikmms/progress-bar-4-axios/0a3acf92/dist/nprogress.css\\"\\n/>\\n\\n<!-- the body of your HTML --\x3e\\n<img id=\\"img\\" />\\n<button onclick=\\"downloadFile()\\">Get Resource</button>\\n<script src=\\"https://unpkg.com/axios/dist/axios.min.js\\"></script>\\n<script src=\\"https://cdn.rawgit.com/rikmms/progress-bar-4-axios/0a3acf92/dist/index.js\\"></script>\\n\\n<!-- add the following to customize the style --\x3e\\n<style>\\n #nprogress .bar {\\n background: red !important;\\n }\\n #nprogress .peg {\\n box-shadow: 0 0 10px red, 0 0 5px red !important;\\n }\\n #nprogress .spinner-icon {\\n border-top-color: red !important;\\n border-left-color: red !important;\\n }\\n</style>\\n\\n
Then you can implement the progress bar like this:
<script type="text/javascript">
  loadProgressBar();

  function downloadFile() {
    getRequest(
      "https://fetch-progress.anthum.com/30kbps/images/sunrise-baseline.jpg"
    );
  }

  function getRequest(url) {
    axios
      .get(url, { responseType: "blob" })
      .then(function (response) {
        const reader = new window.FileReader();
        reader.readAsDataURL(response.data);
        reader.onload = () => {
          document.getElementById("img").setAttribute("src", reader.result);
        };
      })
      .catch(function (error) {
        console.log(error);
      });
  }
</script>
This code uses the FileReader API to asynchronously read the downloaded image. The readAsDataURL method returns the image’s data as a Base64-encoded string, which is then inserted into the src attribute of the img tag to display the image.
Alternatively, if you wish to control the pace of the progress bar, Axios provides an onDownloadProgress event that tracks the progress of your download:
<!-- the head of your HTML -->
<link
  rel="stylesheet"
  type="text/css"
  href="https://cdn.rawgit.com/rikmms/progress-bar-4-axios/0a3acf92/dist/nprogress.css"
/>

<!-- the body of your HTML -->
<img id="img" />
<button onclick="downloadFile()">Get Resource</button>
<script src="https://unpkg.com/axios/dist/axios.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/nprogress/0.2.0/nprogress.min.js"></script>

<script type="text/javascript">
  function downloadFile() {
    getRequest(
      "https://fetch-progress.anthum.com/30kbps/images/sunrise-baseline.jpg"
    );
  }

  function getRequest(url) {
    NProgress.start();
    axios
      .get(url, {
        responseType: "blob",
        onDownloadProgress: (progressEvent) => {
          const percent = Math.round(
            (progressEvent.loaded * 100) / progressEvent.total
          );
          NProgress.set(percent / 100);
          console.log(`Downloaded ${percent}%`);
        },
      })
      .then(function (response) {
        NProgress.done();
        document
          .getElementById("img")
          .setAttribute("src", URL.createObjectURL(response.data));
      })
      .catch(function (error) {
        NProgress.done();
        console.log(error);
      });
  }
</script>
In this example, instead of using Axios Progress Bar, which doesn’t provide custom control, we use NProgress. This library is what Axios Progress Bar uses under the hood. It also lets you customize the style of the progress bar.
\\nOn the other hand, the Fetch API doesn’t have an onprogress
nor an onDownloadProgress
event. Instead, it provides an instance of ReadableStream
via the body property of the response object.
\\nThe following example illustrates the use of ReadableStream
to provide users with immediate feedback during image download:
<!-- the head of your HTML -->
<link
  rel="stylesheet"
  type="text/css"
  href="https://cdn.rawgit.com/rikmms/progress-bar-4-axios/0a3acf92/dist/nprogress.css"
/>

<!-- the body of your HTML -->
<img id="img" />
<button onclick="downloadFile()">Get Resource</button>
<script src="https://cdnjs.cloudflare.com/ajax/libs/nprogress/0.2.0/nprogress.min.js"></script>

<script type="text/javascript">
  function downloadFile() {
    getRequest(
      "https://fetch-progress.anthum.com/30kbps/images/sunrise-baseline.jpg"
    );
  }

  function getRequest(url) {
    NProgress.start();
    fetch(url)
      .then((response) => {
        if (!response.ok) {
          throw Error(response.status + " " + response.statusText);
        }
        // ensure ReadableStream is supported
        if (!response.body) {
          throw Error("ReadableStream not yet supported in this browser.");
        }
        // store the size of the entity-body, in bytes
        const contentLength = response.headers.get("content-length");
        // ensure contentLength is available
        if (!contentLength) {
          throw Error("Content-Length response header unavailable");
        }
        // parse the integer into a base-10 number
        const total = parseInt(contentLength, 10);
        let loaded = 0;
        return new Response(
          // create and return a readable stream
          new ReadableStream({
            start(controller) {
              const reader = response.body.getReader();
              read();
              function read() {
                reader
                  .read()
                  .then(({ done, value }) => {
                    if (done) {
                      controller.close();
                      NProgress.done();
                      return;
                    }
                    loaded += value.byteLength;
                    NProgress.set(loaded / total);
                    controller.enqueue(value);
                    read();
                  })
                  .catch((error) => {
                    console.error(error);
                    controller.error(error);
                  });
              }
            },
          })
        );
      })
      .then((response) => {
        // construct a blob from the data
        return response.blob();
      })
      .then((data) => {
        NProgress.done();
        // insert the downloaded image into the page
        document
          .getElementById("img")
          .setAttribute("src", URL.createObjectURL(data));
      })
      .catch((error) => {
        NProgress.done();
        console.error(error);
      });
  }
</script>
To make multiple simultaneous requests, Axios provides the axios.all() method. Pass an array of requests to this method, then use axios.spread() to assign the properties of the response array to separate variables (note that axios.all() and axios.spread() are deprecated in recent Axios versions, which recommend the native Promise.all() shown next):
axios.all([\\n axios.get(\'https://api.github.com/users/iliakan\'), \\n axios.get(\'https://api.github.com/users/taylorotwell\')\\n])\\n.then(axios.spread((obj1, obj2) => {\\n // Both requests are now complete\\n console.log(obj1.data.login + \' has \' + obj1.data.public_repos + \' public repos on GitHub\');\\n console.log(obj2.data.login + \' has \' + obj2.data.public_repos + \' public repos on GitHub\');\\n}));\\n\\n
You can achieve the same result by using the built-in Promise.all()
method. Pass all fetch()
requests as an array to Promise.all()
. Next, handle the response by using an async
function, like this:
Promise.all([\\n fetch(\'https://api.github.com/users/iliakan\'),\\n fetch(\'https://api.github.com/users/taylorotwell\')\\n])\\n.then(async([res1, res2]) => {\\n const a = await res1.json();\\n const b = await res2.json();\\n console.log(a.login + \' has \' + a.public_repos + \' public repos on GitHub\');\\n console.log(b.login + \' has \' + b.public_repos + \' public repos on GitHub\');\\n})\\n.catch(error => {\\n console.log(error);\\n});\\n\\n
Response management is a critical part of every application invoking an API. In this section, we will briefly look at the two aspects of it: getting the error code and manipulating response data.
\\nError management is different in Axios and the Fetch API. Specifically, fetch()
doesn’t automatically reject the promise
in the event of server-side errors, such as HTTP 404 or 500 status codes. This means that these errors don’t trigger the .catch()
block, unlike in Axios, where such responses would typically be considered exceptions.
Instead, fetch()
will resolve the promise
normally with the ok
status in the response set to false
. The call to fetch()
will only fail on network failures or if anything has prevented the request from completing.
In the following code, you can see how to handle errors in fetch()
:
try {\\n const res = await fetch(\'...\');\\n\\n if (!res.ok) {\\n // Error on the response (5xx, 4xx)\\n switch (res.status) {\\n case 400: /* Handle */ break;\\n case 401: /* Handle */ break;\\n case 404: /* Handle */ break;\\n case 500: /* Handle */ break;\\n }\\n }\\n\\n // Here the response can be properly handled\\n} catch (err) {\\n // Error on the request (Network error)\\n}\\n\\n
Meanwhile, in Axios, you can discriminate all errors in a proper catch
block as shown in the following example:
try {\\n let res = await axios.get(\'...\');\\n // Here the response can be properly handled\\n} catch (err) {\\n if (err.response) {\\n // Error on the response (5xx, 4xx)\\n } else if (err.request) {\\n // Error on the request (Network error)\\n }\\n}\\n\\n
Once the request completes with a proper response and no errors, you can handle the response payload, which the two tools expose through different mechanisms.
In fetch(), the response payload is accessible via the body property and must be parsed (for example, by calling the .json() method), while in Axios it is available on the data field as a ready-to-use JavaScript object. This difference is captured in the following, stripped-down examples:
// Using Fetch API\\nfetch(\'...\')\\n .then(response => response.json())\\n .then(data => console.log(data))\\n .catch(error => console.error(\'Error:\', error)); \\n\\n\\n // Using Axios\\naxios.get(\'...\')\\n .then(response => console.log(response.data))\\n .catch(error => console.error(\'Error:\', error));\\n\\n
The key difference in fetch()
lies in the use of the .json()
method. Despite the name, this method does not produce JSON. Instead, it will take JSON as an input and parse it to produce a JavaScript object.
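For instance, here’s a minimal sketch (inside an async function) showing that .json() yields a parsed object rather than a JSON string:

const res = await fetch("https://api.github.com/users/mapbox");
const user = await res.json(); // parses the JSON body into a JavaScript object
console.log(typeof user); // "object", not "string"
console.log(user.login);  // "mapbox"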
In this section, we will look into some advanced use cases of Axios and fetch()
, like handling response timeouts, cancelling requests, and streaming requests. You’ll often need these features in real-world applications.
The simplicity of setting a timeout in Axios is one of the reasons some developers prefer it to fetch()
. In Axios, you can use the optional timeout
property in the config object to set the number of milliseconds before the request is aborted.
Here’s an example:
\\naxios\\n .get(\\n \\"https://overpass-api.de/api/interpreter?data=\\\\[out:json];way[highway\\\\](40.5,-74,41,-73.5);out qt 30000;\\",\\n { timeout: 4000 }\\n )\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => console.error(\\"timeout exceeded\\"));\\n\\n
fetch()
provides similar functionality through the AbortController
interface. However, it’s not as simple as the Axios version:
const controller = new AbortController();\\nconst options = {\\n method: \\"GET\\",\\n signal: controller.signal,\\n};\\n\\nconst promise = fetch(\\n \\"https://overpass-api.de/api/interpreter?data=\\\\[out:json];way[highway\\\\](40.5,-74,41,-73.5);out qt 30000;\\",\\n options\\n);\\nconst timeoutId = setTimeout(() => controller.abort(), 4000);\\n\\npromise\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => console.error(\\"timeout exceeded\\"));\\n\\n
Here, we created an AbortController
object which allows us to abort the request later using the abort()
method. signal is a read-only property of AbortController that provides a means to communicate with a request or abort it. If the server doesn’t respond within four seconds, controller.abort() is called and the operation is terminated.
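As an aside, modern runtimes also expose the static AbortSignal.timeout() helper, which collapses this boilerplate into a single line; it’s a standard platform API, though you should confirm support in your target browsers:

// Same four-second timeout without manually wiring setTimeout and abort()
fetch(
  "https://overpass-api.de/api/interpreter?data=[out:json];way[highway](40.5,-74,41,-73.5);out qt 30000;",
  { signal: AbortSignal.timeout(4000) }
)
  .then((response) => console.log(response))
  .catch((error) => console.error("timeout exceeded", error));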
As we’ve just seen, we can make use of the abort()
method of AbortController
to cancel requests made with the Fetch API:
const controller = new AbortController();\\n\\nconst getRequest = () => {\\n const options = {\\n method: \\"GET\\",\\n signal: controller.signal,\\n };\\n\\n fetch(\\n \\"https://overpass-api.de/api/interpreter?data=\\\\[out:json];way[highway\\\\](40.5,-74,41,-73.5);out qt 10000;\\",\\n options\\n )\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => console.error(error));\\n};\\n\\nconst cancelRequest = () => {\\n controller.abort();\\n};\\n\\n
In Axios, an optional cancelToken
property in the config object is provided to allow cancelling requests (note that newer Axios versions deprecate CancelToken in favor of the AbortController-based signal option):
const source = axios.CancelToken.source();\\n\\nconst getRequest = () => {\\n axios\\n .get(\\n \\"https://overpass-api.de/api/interpreter?data=\\\\[out:json];way[highway\\\\](40.5,-74,41,-73.5);out qt 10000;\\",\\n {\\n cancelToken: source.token,\\n }\\n )\\n .then((response) => {\\n console.log(response);\\n })\\n .catch((error) => console.error(error));\\n};\\n\\nconst cancelRequest = () => {\\n source.cancel();\\n};\\n\\n
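Since version 0.22, Axios also accepts the same AbortController-based signal option used with fetch(), which is now the recommended cancellation mechanism:

const controller = new AbortController();

const getRequest = () => {
  axios
    .get(
      "https://overpass-api.de/api/interpreter?data=[out:json];way[highway](40.5,-74,41,-73.5);out qt 10000;",
      { signal: controller.signal }
    )
    .then((response) => console.log(response))
    .catch((error) => console.error(error));
};

const cancelRequest = () => {
  controller.abort();
};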
We were introduced to streaming when we looked at download progress. Axios does not support streaming in the browser by default; the onDownloadProgress event is the closest thing to it there. Outside the browser (in Node.js), however, we can set a responseType of stream:
function streamLargeData() {\\n // Store the complete response as it comes in\\n let responseData = \\"\\";\\n axios\\n .get(\\n \\"https://overpass-api.de/api/interpreter?data=\\\\[out:json];way[highway\\\\](40.5,-74,41,-73.5);out qt 50000;\\",\\n {\\n responseType: \\"stream\\",\\n }\\n )\\n .then((response) => {\\n // Handle the data stream\\n response.data.on(\\"data\\", (chunk) => {\\n const chunkData = chunk.toString();\\n responseData += chunkData;\\n console.log(`Received chunk of size: ${chunk.length} bytes`);\\n });\\n\\n // Handle stream completion\\n response.data.on(\\"end\\", () => {\\n const parsedData = JSON.parse(responseData);\\n console.log(parsedData);\\n });\\n })\\n .catch((error) => console.error(error));\\n}\\n\\n
For fetch()
, we can use the ReadableStream
from the response, as we saw earlier in the Download Progress section.
One of the deciding factors when choosing between Axios and the Fetch API is the nature of your project. Here are some common project requirements and the better choice for each:
Use case | Best choice
--- | ---
Simple browser requests | Fetch API (native, lightweight)
Large-scale API calls | Axios (streaming, interceptors, timeouts)
Built-in support in modern frameworks | Fetch API (default in browsers and Node.js v18+)
Retry logic & fault tolerance | Axios (throws for non-2xx errors, built-in timeouts)
File uploads/downloads | Axios (built-in progress events)
Building a reusable HTTP utility | Axios (flexibility and built-in functionality)
Authenticated requests & token management | Axios (built-in interceptors)
Developers are big believers in “if it isn’t broken, don’t fix it.” Devs who have worked at big companies with thousands of lines of code wouldn’t want to change something that works fine.
\\nAs reflected in a Reddit discussion on Axios vs Fetch, this is how most developers feel about switching from Axios to Fetch:
\\nGET
request, whereas Fetch is stuck with the adherence to HTTP standards, even though there are a few developers demanding this feature since 2015fetch()
?Developers often prefer to stick with what already works without problems. The major drawback to Axios is its status as a dependency, and that it’s not causing problems for most developers. Hence why they’d rather stick to that.
Is fetch() better for performance than Axios?

Yes. Fetch is a native browser API, which makes it lighter and more efficient. It has a smaller bundle size (even with a polyfill), runs directly in the browser or Node.js without additional abstractions, and supports streaming responses in the browser, unlike Axios.
Will fetch() fully replace Axios?

No. Axios is just like jQuery in terms of how long it has been helpful to developers. Aside from that, its constant maintenance shows it will be here for a while. About a year ago, Axios added support for Fetch in version 1.7. This means you can choose to use the Fetch API instead of the default XMLHttpRequest
in Axios.
Cross-Origin Resource Sharing (CORS) is an HTTP mechanism that lets a server permit the loading of its resources from origins other than its own. For example, you need CORS when you want to pull data from external APIs that are public or authorized.
\\nIf the CORS mechanism is not properly enabled on the server, any request from a different server — regardless of whether or not it is made with Axios or fetch()
— will be blocked with a “No 'Access-Control-Allow-Origin' header is present on the requested resource” error.
To properly handle CORS, the first step is to configure the server, which depends on your environment/server. Once the server has been properly configured, it will automatically include the Access-Control-Allow-Origin
header in response to all requests (see the documentation for more information).
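As an illustration, here’s roughly what that configuration can look like in an Express backend using the popular cors middleware package (a sketch; the origin value is a placeholder for your frontend’s URL):

const express = require("express");
const cors = require("cors");

const app = express();

// The middleware automatically sets the Access-Control-Allow-Origin
// response header for the permitted origin
app.use(cors({ origin: "https://my-frontend.example.com" }));

app.get("/api/data", (req, res) => res.json({ ok: true }));
app.listen(3000);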
A common error, in both Axios and fetch()
, is to add the Access-Control-Allow-Origin
header to the request — this is a response header, set by the server to specify which origins are permitted to access the resource.
Another aspect to be aware of, when you add the headers to your Axios request, is that the request is handled differently. The browser performs a preflight request before the actual request. This preflight request is an OPTIONS
request that verifies that CORS is honored and that the actual request is safe to send.
Axios provides an easy-to-use API in a compact package for most HTTP communication needs. However, if you prefer to stick with native APIs, nothing is stopping you from implementing the Axios features you need on top of them.
\\nAs discussed in this article, it’s possible to reproduce the key features of the Axios library using the fetch()
method provided by web browsers. Whether it’s worth loading a client HTTP API depends on whether you’re comfortable working with built-in APIs.
Leveraging specialized tools for HTTP requests can make a difference in your day-to-day developer experience and productivity. In this tutorial, we’ll demonstrate how to make HTTP requests using Axios in JavaScript with clear examples, including how to make an Axios request with the four HTTP request methods, how to send multiple requests simultaneously with Promise.all
, and much more.
Axios is a simple, promise-based HTTP client for the browser and Node.js. It provides a consistent way to send asynchronous HTTP requests to the server, handle responses, and perform other network-related operations.
\\nOn the server side, Axios uses Node.js’ native http
module, while on the browser, it uses XMLHttpRequest
objects.
Editor’s note: This blog was updated by David Omotayo in April 2025 to provide a clear and concise general overview of Axios, include practical use cases for Axios, and answer frequently asked questions regarding the HTTP client.
\\nThe most common way for frontend programs to communicate with servers is through the HTTP protocol. You are probably familiar with the Fetch API and the XMLHttpRequest
interface, which allows you to fetch resources and make HTTP requests.
If you’re using a JavaScript library, chances are it comes with a client HTTP API. jQuery’s $.ajax()
function, for example, has been particularly popular with frontend developers. But as developers move away from such libraries in favor of native APIs, dedicated HTTP clients have emerged to fill the gap.
As with Fetch, Axios is promise-based. However, it provides a more powerful and flexible feature set.
Here are a few reasons some developers favor Axios: automatic JSON data transformation, request and response interceptors, richer error handling, and built-in request cancellation and timeout support.
\\nYou can install Axios using the following command for npm, Yarn, and pnpm, respectively:
\\nnpm install axios\\nyarn add axios\\npnpm add axios\\n\\n
To install Axios using a content delivery network (CDN), run the following:
\\n<<script src=\\"https://unpkg.com/axios/dist/axios.min.js\\"></script>\\n\\n
After installing Axios, you can begin making HTTP requests in your application. This is as simple as importing the axios
function and passing a configuration (config) object to it:
import axios from "axios"

axios({
  method: "",
  url: "",
  data: "",
  responseType: "",
  headers: {...},
  timeout: "",
})
These properties, except for the url
and method
properties (which are required in most cases), are optional. If not specified in the configuration, Axios will automatically set default values.
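You can also change those defaults globally through the axios.defaults object, so every subsequent request inherits them; here’s a brief sketch (the baseURL is a placeholder):

import axios from "axios";

// Every request made after this point inherits these values
axios.defaults.baseURL = "https://api.example.com";
axios.defaults.timeout = 5000;
axios.defaults.headers.common["Accept"] = "application/json";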
The method
property accepts one of the four standard HTTP request methods: GET
, POST
, PUT
, and DELETE
, while the url
property accepts the URL of the service endpoint to fetch or send data from or to.
GET
requests with Axios

The GET
request method is the most straightforward. It’s used to read or request data from a specified resource endpoint.
To make a GET
request using Axios, you need to provide the URL from which the data is to be read or fetched to the url
property, and the string \\"get\\"
to the method
property in the config object:
// send a GET request\\naxios({\\n method: \'get\',\\n url: \'api/items\'\\n});\\n\\n
This code will fetch a list of items from the URL endpoint if the request is successful.
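To actually consume the result, chain a handler onto the promise Axios returns; the parsed body lives on response.data (a minimal sketch of the same request):

axios({
  method: "get",
  url: "api/items",
})
  .then((response) => {
    // Axios has already parsed the JSON body for you
    console.log(response.data);
  })
  .catch((error) => console.error(error));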
\\nPOST
requests with Axios

A POST
request is used to send data, such as files or resources, to a server. You can make a POST
request using Axios by providing the URL of the service endpoint and an object containing the key-value pairs to be sent to the server.
For a basic Axios POST
request, the configuration object must include a url
property. If no method property is provided, GET
will be used as the default.
Let’s look at a simple Axios POST
example:
// send a POST request\\naxios({\\n method: \'post\',\\n url: \'api/login\',\\n data: {\\n firstName: \'Finn\',\\n lastName: \'Williams\'\\n }\\n});\\n\\n
This should look familiar to those who have worked with jQuery’s $.ajax
function. This code instructs Axios to send a POST
request to /login
with an object of key-value pairs as its data. Axios will automatically convert the data to JSON and send it as the request body.
Check out our article for more nuances on the POST request method in Axios.
\\nPUT
and DELETE
requests in Axios

The PUT
and DELETE
request methods are similar to POST
in that they each send data to the server, albeit in a different way.
PUT
The PUT
request method is used to send data to a server to create or update a resource using the data provided in the request’s body.
To make a PUT
request using Axios, you need the resource’s Uniform Resource Identifier (URI
), a URL that precisely identifies the location of a resource on the server, and the data payload to be sent to the server:
// Send a PUT request
const updatedItem = { name: 'Updated Item', price: 150 };

axios({
  method: 'put',
  url: 'https://api.example.com/items/1',
  data: updatedItem
})
In this example, the updatedItem
data object will be sent to the location of a resource with an object Id
of 1
.
If the request is successful and an existing resource is found at the URI location, PUT
will replace it. If no existing resource is found, PUT
will create a new one using the provided payload data.
DELETE
The DELETE
request method also uses the URI to identify and remove a specific resource from the server.
To make a DELETE
request using Axios, you need to pass the string \\"delete\\"
to the method
property and provide a URL identifying the target resource in the url
property:
// Send a DELETE request\\n axios({\\n method: \'delete\',\\n url: \'https://api.example.com/items/20\',\\n })\\n\\n
If the request is successful, the resource or file with an id
of 20
will be removed from the server.
Axios also provides a set of shorthand methods for performing different types of requests. The methods include:
\\naxios.request(config)
axios.get(url[, config])
axios.delete(url[, config])
axios.head(url[, config])
axios.options(url[, config])
axios.post(url[, data[, config]])
axios.put(url[, data[, config]])
axios.patch(url[, data[, config]])
For example, the following code shows how the previous POST
example could be written using the axios.post()
shorthand method:
axios.post(\'/login\', {\\n firstName: \'Finn\',\\n lastName: \'Williams\'\\n});\\n\\n
Once an HTTP request is made, Axios returns a promise that is either fulfilled or rejected, depending on the response from the backend service.
\\n\\nTo handle the result, you can use the then()
method, like this:
axios.post(\'/login\', {\\n firstName: \'Finn\',\\n lastName: \'Williams\'\\n})\\n.then((response) => {\\n console.log(response);\\n}, (error) => {\\n console.log(error);\\n});\\n\\n
If the promise is fulfilled, the first argument of then()
will be called; if the promise is rejected, the second argument will be called. According to the Axios documentation, the fulfillment value is an object containing the following properties:
{
  // `data` is the response that was provided by the server
  data: {},

  // `status` is the HTTP status code from the server response
  status: 200,

  // `statusText` is the HTTP status message from the server response
  statusText: 'OK',

  // `headers` the headers that the server responded with
  // All header names are lower cased
  headers: {},

  // `config` is the config that was provided to `axios` for the request
  config: {},

  // `request` is the request that generated this response
  // It is the last ClientRequest instance in node.js (in redirects)
  // and an XMLHttpRequest instance in the browser
  request: {}
}
As an example, here’s how the response looks when requesting data from the GitHub API:
\\naxios.get(\'https://api.github.com/users/mapbox\')\\n .then((response) => {\\n console.log(response.data);\\n console.log(response.status);\\n console.log(response.statusText);\\n console.log(response.headers);\\n console.log(response.config);\\n });\\n\\n// logs:\\n// => {login: \\"mapbox\\", id: 600935, node_id: \\"MDEyOk9yZ2FuaXphdGlvbjYwMDkzNQ==\\", avatar_url: \\"https://avatars1.githubusercontent.com/u/600935?v=4\\", gravatar_id: \\"\\", …}\\n// => 200\\n// => OK\\n// => {x-ratelimit-limit: \\"60\\", x-github-media-type: \\"github.v3\\", x-ratelimit-remaining: \\"60\\", last-modified: \\"Wed, 01 Aug 2018 02:50:03 GMT\\", etag: \\"W/\\"3062389570cc468e0b474db27046e8c9\\"\\", …}\\n// => {adapter: ƒ, transformRequest: {…}, transformResponse: {…}, timeout: 0, xsrfCookieName: \\"XSRF-TOKEN\\", …}\\n\\n
An HTTP request may succeed or fail. Therefore, it is important to handle errors on the client side and provide appropriate feedback for a better user experience.
\\nPossible causes of error in a network request may include server errors, authentication errors, missing parameters, and requesting non-existent resources.
\\nAxios, by default, rejects any response with a status code that falls outside the successful 2xx range. However, you can modify this feature to specify what range of HTTP codes should throw an error using the validateStatus
config option, like in the example below:
axios({\\n baseURL: \\"https://jsonplaceholder.typicode.com\\",\\n url: \\"/todos/1\\",\\n method: \\"get\\",\\n validateStatus: status => status <=500,\\n})\\n .then((response) => {\\n console.log(response.data);\\n })\\n\\n
The error object that Axios passes to the .catch
block has several properties, including the following:
.catch(error => {\\n console.log(error.name)\\n console.log(error.message)\\n console.log(error.code)\\n console.log(error.status)\\n console.log(error.stack)\\n console.log(error.config)\\n})\\n\\n
In addition to the properties highlighted above, if the request was made and the server responded with a status code that falls outside the 2xx range, the error object will also have the error.response
object.
On the other hand, if the request was made but no response was received, the error object will have an error.request
object. Depending on the environment, the error.request
object is an instance of XMLHttpRequest
in the browser environment and an instance of http.ClientRequest
in Node.
You need to check for error.response
and error.request
objects in your .catch
callback to determine the error you are dealing with so that you can take appropriate action:
axios.get(\\"https://jsonplaceholder.typicode.com/todos\\").catch(function (error) {\\n if (error.response) {\\n // Request was made. However, the status code of the server response falls outside the 2xx range\\n console.log(error.response.data);\\n console.log(error.response.status);\\n console.log(error.response.headers);\\n } else if (error.request) {\\n // Request was made but no response received\\n console.log(error.request);\\n } else {\\n // Error was triggered by something else\\n console.log(\\"Error\\", error.message);\\n }\\n console.log(error.config);\\n});\\n\\n
Sometimes, duplicating the code above in the .catch
callback for each request can become tedious and time-consuming. You can instead intercept the error and handle it globally like so:
axios.interceptors.request.use(null, function (error) {\\n // Do something with request error\\n return Promise.reject(error);\\n});\\n\\naxios.interceptors.response.use(null, function (error) {\\n // Do something with response error\\n if (error.response) {\\n // Request was made. However, the status code of the server response falls outside the 2xx range\\n } else if (error.request) {\\n // Request was made but no response received\\n } else {\\n // Error was triggered by something else\\n }\\n return Promise.reject(error);\\n});\\n\\n
A more granular, centralized error-handling approach is maintaining the API globally and managing all response and request errors with a dedicated handler function.
\\nLet’s understand it with a simple React app that shows toast messages when a request or response error occurs. Start by creating a file called api.js
in the src
directory of your React app and use the axios.create
method to create a custom Axios instance.
In this example, I’m using a placeholder API to demonstrate and use one of its endpoints as the base URL of our Axios instance:
\\n// src/api.js\\nimport axios from \'axios\';\\nimport toast from \'react-hot-toast\';\\n\\n// Create a custom Axios instance\\nconst api = axios.create({\\n baseURL: \'https://jsonplaceholder.typicode.com\',\\n});\\n\\n
Next, let’s define a handler function and call it handleError
in the same file. This function takes one argument — expected to be the error object when we implement this with Axios interceptors. With this error object, we can categorize errors based on their type (e.g., response, request, setup) and display appropriate user feedback using a React toast library:
// src/api.js\\n// Previous code... \\n\\n// Centralized error handling\\nconst handleError = (error) => {\\n /*\\n * If request was made, but the status code \\n * of the server response falls outside \\n * the 2xx range.\\n */\\n if (error.response) {\\n\\n // A lookup table of different error messages\\n const messages = {\\n 404: \'Resource not found\',\\n 500: \'Server error. Please try again later.\',\\n };\\n\\n const errorMessage = \\n messages[error.response.status] || `Unexpected error: ${error.response.status}`;\\n\\n toast.error(errorMessage, { id: \'api-error\' });\\n console.error(\'Full error:\', error);\\n return;\\n }\\n\\n // If request was made but no response received\\n if (error.request) {\\n toast.error(\'No response from server. Check your network connection.\', { id: \'api-error\' });\\n console.error(\'Full error:\', error);\\n return;\\n }\\n\\n // If error was triggered by something else\\n toast.error(\'Error setting up the request\', { id: \'api-error\' });\\n console.error(\'Full error:\', error);\\n};\\n\\n
Now, we can add a response interceptor to our custom Axios instance to provide automatic success notifications for successful API responses and delegate error handling to the handleError
function:
// src/api.js\\n// Previous code... \\n\\n// Axios interceptor\\napi.interceptors.response.use(\\n (response) => {\\n const successMessage =\\n response.config.successMessage ||\\n `${response.config.method.toUpperCase()} request successful`;\\n\\n toast.success(successMessage, {\\n id: \'api-success\',\\n });\\n\\n return response;\\n },\\n (error) => {\\n handleError(error);\\n return Promise.reject(error);\\n }\\n);\\n\\nexport default api;\\n\\n
We can then use this custom Axios instance in a component where we want to consume the API (the placeholder API in this case) and let it handle errors by itself. Here’s the complete setup of our React app with HTTP error feedback following a centralized error-handling approach.
\\nasync
and await
The async
and await
syntax is syntactic sugar around the Promise API. It helps you write cleaner, more readable, and maintainable code. With async
and await
, your codebase feels synchronous and easier to think about.
When using async
and await
, you invoke axios
or one of its request methods inside an asynchronous function, like in the example below:
const fetchData = async () => {\\n try {\\n const response = await axios.get(\\"https://api.github.com/users/mapbox\\");\\n console.log(response.data);\\n console.log(response.status);\\n console.log(response.statusText);\\n console.log(response.headers);\\n console.log(response.config);\\n } catch (error) {\\n // Handle error\\n console.error(error);\\n }\\n};\\n\\nfetchData();\\n\\n
When using the async
and await
syntax, it’s standard practice to wrap your code in a try...catch
block. Doing so will ensure you appropriately handle errors and provide feedback for a better user experience.
Promise.all
to send multiple requests

You can use Axios with Promise.all
to make multiple requests in parallel by passing an iterable of promises to it. The Promise.all
static method returns a single promise object that fulfills only when all input promises have been fulfilled.
Here’s a simple example of how to use Promise.all
to make simultaneous HTTP requests:
// execute simultaneous requests \\nPromise.all([\\n axios.get(\\"https://api.github.com/users/mapbox\\"),\\n axios.get(\\"https://api.github.com/users/phantomjs\\"),\\n]).then(([user1, user2]) => {\\n //this will be executed only when all requests are complete\\n console.log(\\"Date created: \\", user1.data.created_at);\\n console.log(\\"Date created: \\", user2.data.created_at);\\n});\\n\\n// logs:\\n// => Date created: 2011-02-04T19:02:13Z\\n// => Date created: 2017-04-03T17:25:46Z\\n\\n
This code makes two requests to the GitHub API and then logs the value of the created_at
property of each response to the console. Keep in mind that if any of the input promises are rejected, the entire promise will immediately be rejected, returning the error from the first promise that encountered a rejection.
Sending custom headers with Axios is straightforward. Simply pass an object containing the headers as the last argument. For example:
\\nconst options = {\\n headers: {\'X-Custom-Header\': \'value\'}\\n};\\n\\naxios.post(\'/save\', { a: 10 }, options);\\n\\n
When making a network request to a server, it is not uncommon to experience delays when the server takes too long to respond. It is standard practice to timeout an operation and provide an appropriate error message if a response takes too long. This ensures a better user experience when the server is experiencing downtime or a higher load than usual.
\\nWith Axios, you can use the timeout
property of your config
object to set the waiting time before timing out a network request. Its value is the waiting duration in milliseconds. The request is aborted if Axios doesn’t receive a response within the timeout duration. The default value of the timeout
property is 0
milliseconds (no timeout).
You can check for the ECONNABORTED
error code and take appropriate action when the request times out:
axios({\\n baseURL: \\"https://jsonplaceholder.typicode.com\\",\\n url: \\"/todos/1\\",\\n method: \\"get\\",\\n timeout: 2000,\\n})\\n .then((response) => {\\n console.log(response.data);\\n })\\n .catch((error) => {\\n if (error.code === \\"ECONNABORTED\\") {\\n console.log(\\"Request timed out\\");\\n } else {\\n console.log(error.message);\\n }\\n });\\n\\n
You can also timeout a network request using the AbortSignal.timeout
static method. It takes the timeout as an argument in milliseconds and returns an AbortSignal
instance. You need to set it as the value of the signal
property.
The network request aborts when the timeout expires. Axios sets the value of error.code
to ERR_CANCELED
and error.message
to canceled
:
const abortSignal = AbortSignal.timeout(200);\\n\\naxios({\\n baseURL: \\"https://jsonplaceholder.typicode.com\\",\\n url: \\"/todos/1\\",\\n method: \\"get\\",\\n signal: abortSignal,\\n})\\n .then((response) => {\\n console.log(response.data);\\n })\\n .catch((error) => {\\n if (error.code === \\"ERR_CANCELED\\" && abortSignal.aborted) {\\n console.log(\\"Request timed out\\");\\n } else {\\n console.log(error.message);\\n }\\n });\\n\\n
Axios automatically serializes JavaScript objects to JSON when making a POST
or PUT
request. This eliminates the need to serialize the request bodies to JSON.
Axios also sets the Content-Type
header to application/json
. This enables web frameworks to automatically parse the data:
// A sample JavaScript object to be sent using Axios\\nconst data = {\\n name: \'Jane\',\\n age: 30\\n};\\n\\n// The `data` object will be automatically converted to JSON\\naxios.post(\'/api/users\', data);\\n\\n
If you want to send a pre-serialized JSON string using a POST
or PUT
request, you’ll need to make sure the Content-Type
header is set:
// A pre-serialized JSON string\\nconst jsonData = JSON.stringify({\\n name: \'John\',\\n age: 33\\n});\\n\\n// Need to manually set Content-Type here\\naxios.post(\'/api/users\', jsonData, {\\n headers: {\\n \'Content-Type\': \'application/json\'\\n }\\n});\\n\\n
Although Axios automatically converts requests and responses to JSON by default, it also allows you to override the default behavior and define a different transformation mechanism. This is particularly useful when working with an API that accepts only a specific data format, such as XML or CSV.
\\nTo change request data before sending it to the server, set the transformRequest
property in the config object.
Note that this method only works for PUT
, POST
, DELETE
, and PATCH
request methods.
Here’s an example of how to use transformRequest
in Axios to transform JSON data into XML data and post it:
const options = {\\n method: \'post\',\\n url: \'/login\',\\n data: {\\n firstName: \'Finn\',\\n lastName: \'Williams\'\\n },\\n transformRequest: [(data, headers) => {\\n // Convert to XML\\n const xmlData = `\\n <?xml version=\\"1.0\\" encoding=\\"UTF-8\\"?>\\n <user>\\n <firstName>${data.firstName}</firstName>\\n <lastName>${data.lastName}</lastName>\\n </user>\\n `;\\n\\n // Set the Content-Type header to XML\\n headers[\'Content-Type\'] = \'application/xml\';\\n\\n return xmlData;\\n }]\\n};\\n\\n// send the request\\naxios(options);\\n\\n
To modify the data before passing it to then()
or catch()
, you can set the transformResponse
property. Leveraging both the transformRequest
and transformResponse
, here’s an example that transforms JSON data to CSV, posts it, and then turns the received response into JSON to use on the client:
const options = {\\n method: \'post\',\\n url: \'/login\',\\n data: {\\n firstName: \'Finn\',\\n lastName: \'Williams\'\\n },\\n transformRequest: [(data, headers) => {\\n // Convert to CSV\\n const csvData = `firstName,lastName\\\\n${data.firstName},${data.lastName}`;\\n\\n // Set the Content-Type header to CSV\\n headers[\'Content-Type\'] = \'text/csv\';\\n\\n return csvData;\\n }],\\n transformResponse: [(data) => {\\n // If server responds with CSV, parse it\\n const rows = data.split(\'\\\\n\');\\n const headers = rows[0].split(\',\');\\n const values = rows[1].split(\',\');\\n\\n return {\\n [headers[0]]: values[0],\\n [headers[1]]: values[1]\\n };\\n }]\\n};\\n\\n// send the request\\naxios(options);\\n\\n
HTTP interception is a popular feature of Axios. With this feature, you can examine and change HTTP requests from your program to the server and vice versa, which is very useful for a variety of implicit tasks, such as logging and authentication.
\\nAxios interceptors are functions that can be executed before a request is sent or after a response is received through Axios. There are two types of interceptor methods in Axios: request and response.
\\nAt first glance, interceptors look very much like transforms, but they differ in one key way: unlike transforms, which only receive the data and headers as arguments, interceptors receive the entire response object or request config.
\\nYou can declare a request interceptor in Axios like this:
\\n// declare a request interceptor\\naxios.interceptors.request.use(config => {\\n // perform a task before the request is sent\\n console.log(\'Request was sent\');\\n\\n return config;\\n}, error => {\\n // handle the error\\n return Promise.reject(error);\\n});\\n\\n// sent a GET request\\naxios.get(\'https://api.github.com/users/mapbox\')\\n .then(response => {\\n console.log(response.data.created_at);\\n });\\n\\n
This code logs a message to the console whenever a request is sent and then waits until it gets a response from the server, at which point it prints the time the account was created on GitHub. One advantage of using interceptors is that you no longer have to implement tasks for each HTTP request separately.
\\nAxios also provides a response interceptor, which allows you to transform the responses from a server on their way back to the application. For example, here’s how to catch errors in an interceptor with Axios:
\\n// declare a response interceptor\\naxios.interceptors.response.use((response) => {\\n // do something with the response data\\n console.log(\'Response was received\');\\n\\n return response;\\n}, error => {\\n // handle the response error\\n return Promise.reject(error);\\n});\\n\\n// sent a GET request\\naxios.get(\'https://api.github.com/users/mapbox\')\\n .then(response => {\\n console.log(response.data.created_at);\\n });\\n\\n
The use cases for Axios go beyond simply posting or fetching data from a server.
\\nIn real-world settings, data is often protected and requires authentication before access is granted, request progress may need to be monitored, and some requests might need to be canceled if they become redundant. These are all real-world use cases for which Axios in JavaScript provides simplified solutions.
\\nAccessing protected data with Axios requires more than just providing a URL. The goal is to securely include credentials with your HTTP requests to authenticate with the server before being granted access to post or fetch data to and from the server.
\\nTypically, you include either a JSON Web Token (JWT
) via the Authorization
header or base64-encoded credentials, depending on the authentication method, to make a protected request.
This is the most common authentication method in modern APIs. This method involves obtaining a token (JSON Web Token – JWT
) after a successful login and including this token in the Authorization
header of subsequent requests.
You can make protected requests with Axios using the bearer token method by adding a headers
option to the config object and passing the token to the Authorization
property:
axios.get(\'/api/protected-resource\', {\\n headers: {\\n Authorization: `Bearer ${Token}`\\n }\\n});\\n\\n
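To avoid repeating this header on every call, you can attach the token in a request interceptor instead (a sketch; getStoredToken() is a hypothetical helper for wherever your app keeps the JWT):

axios.interceptors.request.use((config) => {
  // getStoredToken() is a stand-in for your token storage (memory, cookie, etc.)
  const token = getStoredToken();
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});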
This authentication method uses a similar approach to the bearer token method, but it sends a base64-encoded username and password instead of a token via the Authorization
header in every request:
const credential = `${USERNAME}:${PASSWORD}`;\\nconst token = Buffer.from(credential).toString(\'base64\');\\n\\naxios.get(\'/api/protected-resource\', {\\n headers: {\\n Authorization: `Basic ${token}`\\n }\\n});\\n\\n
While this works, it requires you to manually encode the credentials.
\\nAxios has built-in support for basic auth, so it automatically encodes the USERNAME
and PASSWORD
credentials. This way, all you have to do to invoke APIs protected with basic auth is add an auth
property to the config object with an object value containing the credentials:
axios.get(\'/api/protected-resource\', {\\n auth: {\\n username: USERNAME,\\n password: PASSWORD \\n }\\n});\\n\\n
Another interesting feature of Axios is the ability to monitor request progress. This is especially useful when downloading or uploading large files. The example provided in the Axios documentation gives you a good idea of how that can be done. But for the sake of simplicity and style, we are going to use the Axios Progress Bar module in this tutorial.
\\nThe first thing we need to do to use this module is to include the related style and script:
\\n<link rel=\\"stylesheet\\" type=\\"text/css\\" href=\\"https://cdn.rawgit.com/rikmms/progress-bar-4-axios/0a3acf92/dist/nprogress.css\\" />\\n\\n<script src=\\"https://cdn.rawgit.com/rikmms/progress-bar-4-axios/0a3acf92/dist/index.js\\"></script>\\n\\n
Then we can implement the progress bar like this:
\\nloadProgressBar()\\n\\nconst url = \'https://media.giphy.com/media/C6JQPEUsZUyVq/giphy.gif\';\\n\\nfunction downloadFile(url) {\\n axios.get(url)\\n .then(response => {\\n console.log(response)\\n })\\n .catch(error => {\\n console.log(error)\\n })\\n}\\n\\ndownloadFile(url);\\n\\n
To change the default styling of the progress bar, we can override the following style rules:
\\n#nprogress .bar {\\n background: red !important;\\n}\\n\\n#nprogress .peg {\\n box-shadow: 0 0 10px red, 0 0 5px red !important;\\n}\\n\\n#nprogress .spinner-icon {\\n border-top-color: red !important;\\n border-left-color: red !important;\\n}\\n\\n
In some situations, you may no longer care about the result and want to cancel a request that’s already been sent. This can be done by using AbortController
. You can create an AbortController
instance and set its corresponding AbortSignal
instance as the value of the signal
property of the config object.
Here’s a simple example:
\\nconst controller = new AbortController();\\n\\naxios\\n .get(\\"https://media.giphy.com/media/C6JQPEUsZUyVq/giphy.gif\\", {\\n signal: controller.signal,\\n })\\n .catch((error) => {\\n if (controller.signal.aborted) {\\n console.log(controller.signal.reason);\\n } else {\\n // handle error\\n }\\n });\\n\\n// cancel the request (the reason parameter is optional)\\ncontroller.abort(\\"Request canceled.\\");\\n\\n
Axios also has a built-in function for canceling requests. However, the built-in CancelToken
functionality is deprecated. You may still encounter it in a legacy codebase, but it is not advisable to use it in new projects.
Below is a basic example:
\\nconst source = axios.CancelToken.source();\\n\\naxios.get(\'https://media.giphy.com/media/C6JQPEUsZUyVq/giphy.gif\', {\\n cancelToken: source.token\\n}).catch(thrown => {\\n if (axios.isCancel(thrown)) {\\n console.log(thrown.message);\\n } else {\\n // handle error\\n }\\n});\\n\\n// cancel the request (the message parameter is optional)\\nsource.cancel(\'Request canceled.\');\\n\\n
You can also create a cancel token by passing an executor function to the CancelToken
constructor, as shown below:
const CancelToken = axios.CancelToken;\\nlet cancel;\\n\\naxios.get(\'https://media.giphy.com/media/C6JQPEUsZUyVq/giphy.gif\', {\\n // specify a cancel token\\n cancelToken: new CancelToken(c => {\\n // this function will receive a cancel function as a parameter\\n cancel = c;\\n })\\n}).catch(thrown => {\\n if (axios.isCancel(thrown)) {\\n console.log(thrown.message);\\n } else {\\n // handle error\\n }\\n});\\n\\n// cancel the request\\ncancel(\'Request canceled.\');\\n\\n
We can use Axios with the FormData
object to streamline a file upload. To simplify the demonstration, I’m using React again to create a file upload component with basic error handling.
N.B., we are assuming that a backend API is available to support the file upload in this example. This will make more sense with a test backend API in your local development setup.
\\nLet’s use React’s useState
Hook to manage the file selection and its upload status. Let’s also create a handler function (handleFileChange
) to manage the file selection, which basically updates the selectedFile
state to the file chosen by the user:
import { useState } from \'react\';\\nimport axios from \'axios\';\\n\\nconst FileUploadBox = () => {\\n // State to manage the selected file and upload status\\n const [selectedFile, setSelectedFile] = useState(null);\\n const [uploadStatus, setUploadStatus] = useState(\'\');\\n\\n // Handle file selection\\n const handleFileChange = (event) => {\\n const file = event.target.files[0];\\n setSelectedFile(file);\\n };\\n}\\n\\n
We should now define a handler function (handleFileUpload
) for file upload, which creates a FormData
object if a file is selected. The selected file is then appended to this object, which will be sent in an Axios POST
request. Uploading a file is a heavy operation. Therefore, this handler function should execute asynchronously to allow other operations to continue without blocking the UI thread.
N.B., if your use case allows, you may also use an Axios PUT
request to upload a file, which takes a similar approach but may also require you to add some additional steps:
const FileUploadBox = () => {\\n // Previous code...\\n\\n // Handle file upload\\n const handleFileUpload = async () => {\\n // Ensure a file is selected\\n if (!selectedFile) {\\n setUploadStatus(\'Please select a file first\');\\n return;\\n }\\n\\n // Create a FormData object to send the file\\n const formData = new FormData();\\n formData.append(\'file\', selectedFile);\\n }\\n}\\n\\n
To the same function, i.e., handleFileUpload
, we can add a try...catch
block with a custom Axios instance pointing to our backend API’s endpoint, which is responsible for the file upload. Because it is a file upload, we must set the Content-Type
to multipart/form-data
to have our file properly parsed at the backend.
We may also reflect the upload progress in the frontend using the onUploadProgress
property of our custom Axios instance. If the request is successful, we set the uploadStatus
to something positive, which we can also show through a toast message later. Otherwise, we set a negative message to the uploadStatus
state:
const FileUploadBox = () => {\\n // Previous code...\\n\\n // Handle file upload\\n const handleFileUpload = async () => {\\n try {\\n // Send POST request using Axios\\n const response = await axios.post(\'/api/upload\', formData, {\\n headers: {\\n \'Content-Type\': \'multipart/form-data\'\\n },\\n // Optional: track upload progress\\n onUploadProgress: (progressEvent) => {\\n const percentCompleted = Math.round(\\n (progressEvent.loaded * 100) / progressEvent.total\\n );\\n console.log(`Upload Progress: ${percentCompleted}%`);\\n }\\n });\\n\\n // Handle successful upload\\n setUploadStatus(\'File uploaded successfully!\');\\n console.log(\'Upload response:\', response.data);\\n } catch (error) {\\n // Handle upload error\\n setUploadStatus(\'File upload failed\');\\n console.error(\'Upload error:\', error);\\n }\\n };\\n}\\n\\n
Finally, we should add some JSX to structure our file upload box and use the states, selection handlers, and file upload handlers appropriately, as shown below:
const FileUploadBox = () => {
  // Previous code...

  return (
    <div className="upload-box-container">
      <h2>File Upload</h2>

      <input
        type="file"
        onChange={handleFileChange}
      />

      <button
        onClick={handleFileUpload}
        disabled={!selectedFile}
      >
        Upload File
      </button>

      {uploadStatus && (
        <p>
          {uploadStatus}
        </p>
      )}
    </div>
  );
};

export default FileUploadBox;
As an assignment, you may try adding previously discussed Axios interceptors-based error handling to this example. Find the code for this example in this StackBlitz demo.
\\nAxios’ rise in popularity among developers has resulted in a rich selection of third-party libraries that extend its functionality. From testers to loggers, there’s a library for almost any additional feature you may need when using Axios. Here are some libraries that are currently available:
\\nAxios is an open-source HTTP library for JavaScript that lets developers make HTTP requests from both the browser and Node.js.
\\nTo make a GET request with Axios, import the library, then call the axios.get(url)
with your API endpoint. Handle the response using .then()
for success or .catch()
for errors.
Axios offers features like automatic JSON data transformation, interceptors, and better error handling, which makes it convenient for complex applications.
\\nFor a detailed comparison of both tools, refer to our article, Axios vs. Fetch.
\\nAxios interceptors are functions that intercept and handle HTTP requests and responses. These functions act as middlewares that can be used to transform and modify requests before they are sent, or manipulate responses before you pass them to your functions.
\\napplication/x-www-form-urlencoded
?Axios automatically serializes objects to JSON and simultaneously sets the headers for you.
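If an endpoint does require application/x-www-form-urlencoded data, one common approach is to pass a URLSearchParams object, which Axios serializes with the matching Content-Type (a brief sketch):

const params = new URLSearchParams();
params.append("firstName", "Finn");
params.append("lastName", "Williams");

// Sent as application/x-www-form-urlencoded rather than JSON
axios.post("/login", params);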
\\nThere’s a good reason Axios is so popular among developers; it’s packed with useful features. In this post, we took a look at several key features of Axios and learned how to use them in practice. But there are still many aspects of Axios that we haven’t discussed. Be sure to check out the Axios GitHub page to learn more.
\\nDo you have any tips on using Axios? Let us know in the comments!
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nLink
component\\n Link
component\\n Link
\\n Link
with custom elements\\n Link
component\\n In Next.js, the Link
component plays a key role in client-side navigation, enabling fast transitions between pages without a full page reload. Link
is a React component that extends the standard HTML link <a>
element. This means that it works like the HTML element but with prefetching and optimization features.
In web development, efficient navigation is important in any application, as it impacts user experience, performance, and SEO. In this article, we will be looking to understand how the Link
component works in Next, from basic to advanced implementations.
Link
component

The primary advantage of the Link
component is its ability to handle client-side navigation. Instead of reloading the entire page, Next.js only updates the content that has changed, resulting in faster page transitions and better user experience.
The Link
component is used to navigate between pages without triggering a full page reload. This makes navigation feel very fast, or even instant.
To use it, import it from next/link
and use it as a wrapper. Its usage is similar to the standard HTML anchor tag/element:
import Link from \\"next/link\\";\\n\\nfunction page() {\\n return (\\n <div>\\n <h1>Page Navigation with Next.js Link</h1>\\n <Link href=\\"/about\\">\\n About Page\\n </Link>\\n </div>\\n );\\n}\\nexport default page;\\n\\n
In the code example above, clicking the link navigates you to the About page while preserving the application state. Similar to the anchor tag, the Link
component has a href
prop that indicates the path or URL you want to navigate to.
Link
component

The Link component can be used for a few key purposes.
The primary feature and use case of the Link
component is its ability to handle client-side navigation, i.e. navigating to a different page without reloading the entire page.
You can perform different client-side navigations like linking to internal pages:
<Link href="/">Home Page</Link>
You can also link to nested pages:
<Link href="/blog/understanding-nextjs-link-component">Blog Post</Link>
Similarly, you can do client-side navigation with dynamic routes, where [slug]
represents each specific blog route:
<Link href="/blog/[slug]">Blog Post</Link>
One of the main features or capabilities of the Next.js Link
component is preloading and prefetching. By default, Next.js automatically prefetches and preloads linked pages in the background when they appear in the viewport:
<Link href=\\"/about\\">About Page</Link>\\n\\n
In this case, prefetching means that the moment the /about
page Link
becomes visible, Next.js starts preloading the /about
page.
When you click the Link
to navigate to the About page, it loads up the page since it has already fetched it behind the scenes. This improves perceived performance since the loading time feels faster.
There are some cases where you may want to disable the prefetching feature. Although prefetching improves performance, it can also consume bandwidth, so it makes sense to disable it for links that are rarely clicked.
\\nTo disable the prefetching feature, set prefetch
prop to false (the prop only accepts a Boolean or null):
<Link href=\\"/about\\" prefetch={false}>\\n About Page\\n</Link>\\n\\n
By default, prefetch is enabled. Additionally, prefetching is only enabled in production.
\\nLink
There are a few additional — and sometimes more advanced — use cases for Next Link
as well:
replace
for state management

When you navigate to a new page, Next.js pushes a new entry into the browser’s history. However, if you want to replace the current entry, you can do so using the replace
prop:
<Link href=\\"/about\\" replace>About Page</Link>\\n\\n
This prevents adding or pushing a new entry to the browser’s history stack, therefore, preventing the user from navigating back to the previous page using the browser’s back button.
\\n\\nA good scenario for this use case would be to avoid form resubmission. When a user submits a form and is taken to a success page, you don’t want the user to go back to resubmit the form.
\\nLink
with custom elements

If your Link
component wraps a custom element like a button or <a>
element, it’s important to pass the prop passHref
to the Link
component so that the element contains the href
attribute.
This could be useful, for example, if you’re using a library:
\\nimport Link from \\"next/link\\";\\n\\nfunction page() {\\n return (\\n <div>\\n <h1>Page Navigation with Next.js Link</h1>\\n <Link href=\\"/dashboard\\" passHref>\\n <button onClick={(e) => console.log(e)}>Dashboard Page</button>\\n </Link>\\n </div>\\n );\\n}\\nexport default page;\\n\\n
Link
Dynamic routes are routes that constantly change based on URL parameters. They help when you want to create links or routes but do not know the exact segment names ahead of time. This means that a dynamic route segment like /blog/[id]
could lead to multiple blog routes where id
represents the routes of each blog.
When working with dynamic routes, the Next.js Link
component can handle complex URL structures. For example, if you have a dynamic route like /posts/[postId]
, you can use the Link
component to navigate to specific posts:
// Dynamic route with single parameter\\n<Link href={`/posts/${postId}`}>\\n View Post\\n</Link>\\n\\n// Dynamic route with multiple parameters\\n<Link \\n href={{\\n pathname: \'/posts/[category]/[id]\',\\n query: { category: \'tech\', id: \'123\' },\\n }}\\n>\\n Read Article\\n</Link>\\n\\n
Link
The Link
component provides built-in scroll management with customizable options. By default, Next.js automatically scrolls to the top of the page after navigation. However, you can disable this behavior by setting the scroll
prop to false. This preserves the scroll position when navigating to the new page:
<Link href=\\"/terms\\" scroll={false}>\\n Dashboard\\n</Link>\\n\\n
You can also configure the scroll
behavior using the scroll
prop options:
<Link\\n href=\\"/about\\"\\n scroll={(ele) => ele.scrollIntoView({ behavior: \\"smooth\\" })}\\n >\\n About Page\\n</Link>\\n\\n
Link
attributes

Similar to the standard HTML anchor tag or element, you can add custom attributes to a Next.js Link
component. Attributes like target
, aria-label
, aria-checked
, rel
can all be added:
import Link from \\"next/link\\";\\n\\nfunction page() {\\n return (\\n <div>\\n <h1>Page Navigation with Next.js Link</h1>\\n <Link\\n href=\\"/dashboard\\"\\n target=\\"_blank\\"\\n aria-checked\\n rel=\\"noopener noreferrer\\"\\n aria-label=\\"Go to Dashboard\\"\\n >\\n Dashboard\\n </Link>\\n </div>\\n );\\n}\\nexport default page;\\n\\n
For accessibility and SEO purposes, you can use the aria
props. You can also use the target
prop to choose whether external links open in a new tab.
Next has a Hook called usePathname()
that allows you to get and read the current link or URL. With this Hook, you can style your active and inactive links using CSS:
\\"use client\\";\\nimport { usePathname } from \\"next/navigation\\";\\nimport Link from \\"next/link\\";\\nfunction page() {\\n const pathname = usePathname();\\n return (\\n <nav>\\n <h1>Page Navigation with Next.js Link</h1>\\n <Link className={`link ${pathname === \\"/\\" ? \\"active\\" : \\"\\"}`} href=\\"/\\">\\n Home\\n </Link>\\n <Link\\n className={`link ${pathname === \\"/dashboard\\" ? \\"active\\" : \\"\\"}`}\\n href=\\"/dashboard\\"\\n >\\n Dashboard\\n </Link>\\n </nav>\\n );\\n}\\nexport default page;\\n\\n
Link
component

Let’s look at some of the best practices to follow when it comes to using the Link
component:
If links are rarely visited, disable the prefetch
feature. This helps avoid performance issues, especially when you have a complex and large application.
Use descriptive href
URLs for better SEO.
This is very important for various reasons, ranging from SEO to inclusivity. Always include descriptive labels and texts for each link context. ARIA attributes exist for this purpose.
\\nnav
links

For a better user experience, you should always highlight active nav links. Use CSS styles to distinguish your active and inactive nav links so users know which page they are currently on.
<a>
for external linking

Although the Next Link
works for external linking too, always use the standard <a>
tag for this purpose, as it better suits the behavior.
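For instance:

{/* A plain anchor element for an external destination */}
<a href="https://example.com" target="_blank" rel="noopener noreferrer">
  External site
</a>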
The Next.js Link
component is a great way to navigate between pages, as it’s more optimized and better suited for your Next.js applications. With features like automatic prefetching and seamless integration with dynamic routing, Link
helps you create a fast and responsive web application. Whether you’re building a simple static app, blog, or complex web app, Next Link
delivers across use cases.
Finally, for your external links, you should use the standard <a>
tag or element to ensure proper behavior.
Creating a server with TypeScript using Node.js and Express is a good alternative to using JavaScript because it makes it easier to manage complex applications. It also helps when you need to collaborate with a distributed team of developers.
TypeScript offers benefits like:

Static typing that catches potential bugs during development
Auto-completion and richer tooling in your editor
Clearer, self-documenting code that is easier to maintain as projects evolve
\\nAll of these benefits make TypeScript a great choice for a smoother development experience, especially in evolving projects.
\\nIn this article, we’ll explore a beginner-friendly way to configure TypeScript in an Express app, and gain an understanding of the fundamental constraints that accompany it. To follow along, you should have:
\\nCheck out the GitHub repository for the source code; the main branch has the TypeScript project, and the JavaScript branch has the JavaScript version.
Editor’s note: This article was updated by Muhammed Ali in March 2025 to expand coverage of linting with ESLint + Prettier, add information on watchers (e.g., tsc --watch, nodemon), and provide deeper sample code, including demonstrating a small CRUD API.
\\n“Express TypeScript” refers to using the Express framework within a TypeScript project. It involves writing your Express server code in TypeScript, leveraging type definitions (often provided via @types/express
) to enable type checking, auto-completion, and better documentation. Essentially, it’s about combining Express’s flexibility with TypeScript’s safety and developer tooling benefits.
TypeScript is a great companion for Express because it provides static typing, which can catch potential bugs during development. With TypeScript, you can define interfaces for requests, responses, and even middleware, making your Express code more predictable and maintainable. This leads to improved developer productivity and more robust applications.
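As a quick illustration, here is a minimal sketch of a typed request body in an Express handler; the /users route and the CreateUserBody interface are hypothetical:

import express, { Request, Response } from 'express';

// Hypothetical payload shape for a create-user endpoint
interface CreateUserBody {
  name: string;
  email: string;
}

const app = express();
app.use(express.json());

// Request<Params, ResBody, ReqBody> lets TypeScript type-check req.body
app.post('/users', (req: Request<Record<string, never>, unknown, CreateUserBody>, res: Response) => {
  const { name, email } = req.body; // typed as CreateUserBody
  res.status(201).json({ name, email });
});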
\\nThis article provides a comprehensive guide on setting up a Node.js and Express project with TypeScript, covering essential steps such as initializing the project, configuring TypeScript, structuring the project, and implementing typed environment variables.
\\nIt will also detail how to set up a basic CRUD API, including creating controllers, routes, and error handling middleware. Additionally, the guide includes instructions for linting with ESLint and Prettier, automating development with nodemon, and running the project in watch mode.
\\nThe goal is to demonstrate best practices for building a robust, type-safe Express application using TypeScript. Let’s get started:
\\nStart with the following:
\\nmkdir ts-node-express && cd ts-node-express\\nnpm init -y\\n\\n
Then install dependencies:
\\nnpm install express dotenv npm install -D typescript ts-node @types/node @types/express nodemon eslint prettier\\n
The dotenv package is used to read environment variables from a .env
file.
The -D
, or --dev
, flag directs the package manager to install these libraries as development dependencies.
ts-node
— Enables running TypeScript files directly without pre-compiling to JavaScript@types/node
— Provides TypeScript type definitions for Node.js core modules@types/express
— Adds TypeScript type definitions for the Express frameworknodemon
— Automatically restarts the server when file changes are detected during developmenteslint
— Lints the code to catch errors and enforce coding standardsprettier
— Formats the code to ensure consistent style across the projectInstalling these packages will add a new devDependencies
object to the package.json
file, featuring version details for each package, as shown below:
{\\n...\\n \\"devDependencies\\": {\\n \\"@types/express\\": \\"^5.0.1\\",\\n \\"@types/node\\": \\"^22.13.11\\",\\n \\"eslint\\": \\"^9.22.0\\",\\n \\"nodemon\\": \\"^3.1.9\\",\\n \\"prettier\\": \\"^3.5.3\\",\\n \\"ts-node\\": \\"^10.9.2\\",\\n \\"typescript\\": \\"^5.8.2\\"\\n }\\n}\\n\\n
Every TypeScript project utilizes a configuration file to manage various project settings. The tsconfig.json
file, which serves as the TypeScript configuration file, outlines these default options and offers the flexibility to modify or customize compiler settings to suit your needs.
The tsconfig.json
file is usually placed at the project’s root. To generate this file, use the following tsc
command, initiating the TypeScript compiler:
npx tsc --init\\n\\n
Once you execute this command, you'll notice that a tsconfig.json file is created at the root of your project directory. It contains the default compiler options; for this project, update it to the following configuration:
\\n{\\n \\"compilerOptions\\": {\\n \\"target\\": \\"ES2020\\",\\n \\"module\\": \\"commonjs\\",\\n \\"outDir\\": \\"./dist\\",\\n \\"rootDir\\": \\"./src\\",\\n \\"strict\\": true,\\n \\"esModuleInterop\\": true,\\n \\"skipLibCheck\\": true,\\n \\"forceConsistentCasingInFileNames\\": true\\n },\\n \\"include\\": [\\"src/**/*\\"],\\n \\"exclude\\": [\\"node_modules\\"]\\n}\\n\\n
Create the following project structure:
\\nts-node-express/\\n├── src/\\n│ ├── config/\\n│ │ └── config.ts // Load and type environment variables\\n│ ├── controllers/\\n│ │ └── itemController.ts // CRUD logic for \\"items\\"\\n│ ├── middlewares/\\n│ │ └── errorHandler.ts // Global typed error handling middleware\\n│ ├── models/\\n│ │ └── item.ts // Define item type and in-memory storage\\n│ ├── routes/\\n│ │ └── itemRoutes.ts // Express routes for items\\n│ ├── app.ts // Express app configuration (middlewares, routes)\\n│ └── server.ts // Start the server\\n├── .env // Environment variables\\n├── package.json // Project scripts, dependencies, etc.\\n├── tsconfig.json // TypeScript configuration\\n├── .eslintrc.js // ESLint configuration\\n└── .prettierrc // Prettier configuration\\n\\n
File: src/config/config.ts
:
import dotenv from \'dotenv\';\\n\\ndotenv.config();\\n\\ninterface Config {\\n port: number;\\n nodeEnv: string;\\n}\\n\\nconst config: Config = {\\n port: Number(process.env.PORT) || 3000,\\n nodeEnv: process.env.NODE_ENV || \'development\',\\n};\\n\\nexport default config;\\n\\n
This file loads your environment variables from a .env
file and provides type checking.
File: .env
PORT=3000\\nNODE_ENV=development\\n\\n
File: src/models/item.ts
:
export interface Item {\\n id: number;\\n name: string;\\n}\\n\\nexport let items: Item[] = [];\\n\\n
We define a simple Item
type and an in-memory array to store items.
File: src/controllers/itemController.ts
:
import { Request, Response, NextFunction } from \'express\';\\nimport { items, Item } from \'../models/item\';\\n\\n// Create an item\\nexport const createItem = (req: Request, res: Response, next: NextFunction) => {\\n try {\\n const { name } = req.body;\\n const newItem: Item = { id: Date.now(), name };\\n items.push(newItem);\\n res.status(201).json(newItem);\\n } catch (error) {\\n next(error);\\n }\\n};\\n\\n// Read all items\\nexport const getItems = (req: Request, res: Response, next: NextFunction) => {\\n try {\\n res.json(items);\\n } catch (error) {\\n next(error);\\n }\\n};\\n\\n// Read single item\\nexport const getItemById = (req: Request, res: Response, next: NextFunction) => {\\n try {\\n const id = parseInt(req.params.id, 10);\\n const item = items.find((i) => i.id === id);\\n if (!item) {\\n res.status(404).json({ message: \'Item not found\' });\\n return;\\n }\\n res.json(item);\\n } catch (error) {\\n next(error);\\n }\\n};\\n\\n// Update an item\\nexport const updateItem = (req: Request, res: Response, next: NextFunction) => {\\n try {\\n const id = parseInt(req.params.id, 10);\\n const { name } = req.body;\\n const itemIndex = items.findIndex((i) => i.id === id);\\n if (itemIndex === -1) {\\n res.status(404).json({ message: \'Item not found\' });\\n return;\\n }\\n items[itemIndex].name = name;\\n res.json(items[itemIndex]);\\n } catch (error) {\\n next(error);\\n }\\n};\\n\\n// Delete an item\\nexport const deleteItem = (req: Request, res: Response, next: NextFunction) => {\\n try {\\n const id = parseInt(req.params.id, 10);\\n const itemIndex = items.findIndex((i) => i.id === id);\\n if (itemIndex === -1) {\\n res.status(404).json({ message: \'Item not found\' });\\n return;\\n }\\n const deletedItem = items.splice(itemIndex, 1)[0];\\n res.json(deletedItem);\\n } catch (error) {\\n next(error);\\n }\\n};\\n\\n
Each controller function includes basic error handling using a try/catch
block, passing errors to the error-handling middleware via next().
File: src/routes/itemRoutes.ts
:
import { Router } from \'express\';\\nimport {\\n createItem,\\n getItems,\\n getItemById,\\n updateItem,\\n deleteItem,\\n} from \'../controllers/itemController\';\\n\\nconst router = Router();\\n\\nrouter.get(\'/\', getItems);\\nrouter.get(\'/:id\', getItemById);\\nrouter.post(\'/\', createItem);\\nrouter.put(\'/:id\', updateItem);\\nrouter.delete(\'/:id\', deleteItem);\\n\\nexport default router;\\n\\n
This file defines the RESTful routes for your CRUD operations.
\\n\\nFile: src/middlewares/errorHandler.ts
:
import { Request, Response, NextFunction } from \'express\';\\n\\nexport interface AppError extends Error {\\n status?: number;\\n}\\n\\nexport const errorHandler = (\\n err: AppError,\\n req: Request,\\n res: Response,\\n next: NextFunction\\n) => {\\n console.error(err);\\n res.status(err.status || 500).json({\\n message: err.message || \'Internal Server Error\',\\n });\\n};\\n\\n
This middleware catches errors thrown in your routes/controllers and sends a consistent, type-safe JSON error response.
\\nFile: src/app.ts
:
import express from \'express\';\\nimport itemRoutes from \'./routes/itemRoutes\';\\nimport { errorHandler } from \'./middlewares/errorHandler\';\\n\\nconst app = express();\\n\\napp.use(express.json());\\n\\n// Routes\\napp.use(\'/api/items\', itemRoutes);\\n\\n// Global error handler (should be after routes)\\napp.use(errorHandler);\\n\\nexport default app;\\n\\n
File: src/server.ts
:
import app from \'./app\';\\nimport config from \'./config/config\';\\n\\napp.listen(config.port, () => {\\n console.log(`Server running on port ${config.port}`);\\n});\\n\\n
ESLint and Prettier are essential tools for maintaining code quality and consistency in a TypeScript project. ESLint is a linter that analyzes code for potential errors, stylistic issues, and adherence to best practices, while Prettier is a code formatter that ensures a consistent code style across the entire codebase.
\\nIn the .eslintrc.js
file, paste in the following code (note that ESLint v9 defaults to the flat eslint.config.js format, so on v9 you would translate these settings into an equivalent flat config):
module.exports = {\\n parser: \'@typescript-eslint/parser\',\\n plugins: [\'@typescript-eslint\'],\\n extends: [\\n \'eslint:recommended\',\\n \'plugin:@typescript-eslint/recommended\',\\n \'prettier\',\\n ],\\n env: {\\n node: true,\\n es6: true,\\n },\\n};\\n\\n
In .prettierrc
put the following:
{\\n \\"semi\\": true,\\n \\"singleQuote\\": true,\\n \\"trailingComma\\": \\"all\\"\\n}\\n\\n
In your package.json
, add scripts for TypeScript compilation and automatic server restart. For example:
{\\n \\"scripts\\": {\\n \\"build\\": \\"tsc\\",\\n \\"start\\": \\"node dist/server.js\\",\\n \\"dev\\": \\"nodemon --watch \'src/**/*.ts\' --exec \'ts-node\' src/server.ts\\",\\n \\"lint\\": \\"eslint \'src/**/*.ts\'\\"\\n },\\n ...\\n}\\n\\n
tsc --watch — for continuous compilation in development
nodemon — to automatically restart your server when files change

Start the server:
\\nnpm run dev\\n\\n
Your Express API is now running with TypeScript:
To create an item, send a POST
request with a JSON payload to the /api/items
endpoint:
curl -X POST http://localhost:3000/api/items \\\\\\n -H \\"Content-Type: application/json\\" \\\\\\n -d \'{\\"name\\": \\"Sample Item\\"}\'\\n\\n
Update an item:
\\ncurl -X PUT http://localhost:3000/api/items/1234567890 \\\\\\n -H \\"Content-Type: application/json\\" \\\\\\n -d \'{\\"name\\": \\"Updated Item Name\\"}\'\\n\\n
These commands assume your server is running on port 3000
and the routes are defined as described in the project setup. Adjust the item ID and JSON data as needed.
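The read and delete routes can be exercised the same way; these commands assume the same server and a previously created item ID:

# List all items
curl http://localhost:3000/api/items

# Fetch a single item by its ID
curl http://localhost:3000/api/items/1234567890

# Delete an item
curl -X DELETE http://localhost:3000/api/items/1234567890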
Next, let's set up testing with Jest in your TypeScript Node.js Express project. This builds on the CRUD API project structure we discussed earlier.
\\nTesting is a significant part of the software development lifecycle. It helps ensure that your application behaves as expected and makes your code more maintainable. In this section, we’ll walk through setting up testing using Jest in a TypeScript-based Node.js Express project.
Jest is a popular testing framework originally developed at Meta (formerly Facebook). It offers several benefits: a zero-configuration setup for most projects, built-in assertion and mocking utilities, and first-class TypeScript support via ts-jest.
Installing Jest and ts-jest
First, you’ll need to install Jest along with the TypeScript preprocessor ts-jest
and type definitions for Jest. Run the following command:
npm install --save-dev jest ts-jest @types/jest\\n\\n
This command adds Jest as a development dependency along with everything needed to run tests written in TypeScript.
\\nNext, configure Jest for your project. Create a jest.config.js
file in the root directory of your project:
module.exports = {\\n preset: \'ts-jest\',\\n testEnvironment: \'node\',\\n moduleFileExtensions: [\'ts\', \'js\'],\\n testMatch: [\'**/tests/**/*.test.(ts|js)\'],\\n globals: {\\n \'ts-jest\': {\\n tsconfig: \'tsconfig.json\',\\n },\\n },\\n};\\n\\n
This configuration tells Jest to:
\\nts-jest
to process TypeScript filestests
folder with names ending in .test.ts
or .test.js
A common approach for organizing your test files is to create a separate folder for tests:
\\nproject/\\n├── src/\\n│ ├── controllers/\\n│ │ └── itemController.ts\\n│ ├── middlewares/\\n│ │ └── errorHandler.ts\\n│ └── ... \\n├── tests/\\n│ └── itemController.test.ts\\n└── ...\\n\\n
This organization keeps your tests separate from your production code.
\\n\\nLet’s create a simple test for our CRUD API. For demonstration purposes, we’ll write a test for the controller that fetches all items. Assume we have a basic controller function in src/controllers/itemController.ts
that looks like this:
import { Request, Response, NextFunction } from \'express\';\\nimport { items } from \'../models/item\';\\n\\nexport const getItems = (req: Request, res: Response, next: NextFunction) => {\\n try {\\n res.json(items);\\n } catch (error) {\\n next(error);\\n }\\n};\\n\\n
Now, create a test file at tests/itemController.test.ts
:
import { Request, Response } from \'express\';\\nimport { getItems } from \'../src/controllers/itemController\';\\nimport { items } from \'../src/models/item\';\\n\\ndescribe(\'Item Controller\', () => {\\n it(\'should return an empty array when no items exist\', () => {\\n // Create mock objects for Request, Response, and NextFunction\\n const req = {} as Request;\\n const res = {\\n json: jest.fn(),\\n } as unknown as Response;\\n\\n // Ensure that our in-memory store is empty\\n items.length = 0;\\n\\n // Execute our controller function\\n getItems(req, res, jest.fn());\\n\\n // Expect that res.json was called with an empty array\\n expect(res.json).toHaveBeenCalledWith([]);\\n });\\n});\\n\\n
In this test:
\\nRequest
and Response
objectsgetItems
controller and assert that it responds with an empty arrayTo run your tests easily, add a script to your package.json
:
{\\n \\"scripts\\": {\\n \\"test\\": \\"jest\\",\\n \\"test:watch\\": \\"jest --watch\\"\\n }\\n}\\n\\n
Now, you can run npm test
to execute your tests. The --watch
flag is helpful during development as it reruns tests when files change.
For a smoother development experience, use Jest’s --watch
mode or integrate it with your existing development watchers like nodemon
to run tests automatically as you code.
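One possible setup, sketched here against the folder layout above (the test:dev script name is just a suggestion), is a nodemon-driven test script in package.json:

{
  "scripts": {
    "test:dev": "nodemon --watch src --watch tests --ext ts --exec \"npm test\""
  }
}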
Next, let's deploy the TypeScript + Express application using Docker.
\\nDeploying your TypeScript + Express application with Docker streamlines the setup process and ensures consistency across environments. Below, we detail the necessary steps, including creating a Dockerfile, setting up a .dockerignore file, and building and running the Docker container.
\\nPlace a Dockerfile
in the root of your project. This file defines your container’s environment and instructions to build your app:
# Use an official lightweight Node.js image.\\nFROM node:18-alpine\\n\\n# Set the working directory in the container.\\nWORKDIR /usr/src/app\\n\\n# Copy package.json and package-lock.json (if available)\\nCOPY package*.json ./\\n\\n# Install dependencies.\\nRUN npm install\\n\\n# Copy the rest of the source code.\\nCOPY . .\\n\\n# Build the project (assuming tsc is configured to output to the \'dist\' folder)\\nRUN npm run build\\n\\n# Expose the port (make sure this matches your config; here we assume 3000)\\nEXPOSE 3000\\n\\n# Start the application.\\nCMD [\\"npm\\", \\"start\\"]\\n\\n
To optimize your Docker image and avoid copying unnecessary files, create a .dockerignore
file in your project root:
node_modules\\nnpm-debug.log\\ndist\\n.env\\n\\n
This file tells Docker which files and directories to ignore when building the container image, reducing build context size.
\\nOnce your Dockerfile and .dockerignore are set up, you can build and run your Docker container using the following commands:
\\nBuild the Docker image:
\\ndocker build -t ts-express-app .\\n\\n
This command builds an image tagged ts-express-app
from the current directory.
Run the Docker container:
\\ndocker run -p 3000:3000 ts-express-app\\n\\n
The -p 3000:3000
flag maps port 3000 of your container to port 3000 on your host machine, allowing you to access your application via http://localhost:3000
.
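Because .env is excluded by .dockerignore, environment variables are not baked into the image. One way to supply them at runtime is Docker's --env-file flag:

docker run -p 3000:3000 --env-file .env ts-express-app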
In a TypeScript project, transpiling or building involves the TypeScript Compiler (TSC) interpreting the tsconfig.json
file to determine how to convert TypeScript files into valid JavaScript.
To compile the code, you must execute the command npm run build
. A new dist directory is created in the project root after successfully executing this command for the first time. Within this directory, you will find the compiled versions of our TypeScript files in the form of valid JavaScript. This compiled JavaScript is essentially what is used in the production environment.
If you designate any other directory as the value for the outDir
field in the tsconfig.json
file, that specified directory would be reflected here instead of dist
.
To improve this process further, set up TypeScript for reliability with strict type checking and configurations that adapt to your needs. Make the most of the tsconfig.json
file by specifying the production settings best suited to your project. You can improve performance with code splitting via tools like webpack, and shrink file sizes with a minifier like Terser.
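As an illustration (these particular options are suggestions, not requirements), a few production-leaning additions to compilerOptions might look like this:

{
  "compilerOptions": {
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "sourceMap": true,
    "removeComments": true
  }
}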
As the project expands, ensure code stability through automated testing with tools like Jest and streamline the workflow from development to production with CI/CD pipelines.
\\nIn this guide, we explored how to set up TypeScript with Node.js and Express, focusing on configuring key elements for a smooth development experience. We created a server, configured ts-node
, and used nodemon for hot reloading to streamline the workflow. We also saw how to unit test the API endpoints and, finally, how to deploy the app with Docker.
Using TypeScript has its benefits, but it does come with a bit of a learning curve. You have to carefully analyze whether using TypeScript in your Node.js and Express backend projects is beneficial or not, which may depend on the requirements of your project.
\\n TanStack Table, formerly known as React Table, is a headless UI for building tables and datagrids across multiple frameworks, including React, Solid, Vue, and even React Native. Being “headless” means it doesn’t provide pre-built components or styles, giving you full control over markup and design. It’s perfect if you want a customizable and lightweight table solution that can be used with any JavaScript framework, thanks to it being framework agnostic.
\\nIn July 2022, Tanner Linsley, creator of TanStack, announced the release of TanStack Table, which offers a major upgrade from React Table v7. TanStack Table v8 was completely rewritten in TypeScript to be more performant and feature-rich, while also expanding support to frameworks like Vue, Solid, and Svelte.
\\nThe react-table NPM package is no longer stable or maintained. The new version is published under the @tanstack scope, with the @tanstack/react-table adapter handling React-specific state management and rendering.
\\nEditor’s note: This article was last updated by Saleh Mubashar in March 2025 to explain TanStack Table’s emergence over the now-outdated React Table, and provide a direct comparison between TanStack Table, Material React Table, and Material UI table.
\\nTables are useful in React for displaying structured data, such as financial reports, sports leaderboards, and pricing comparisons:
\\nPopular products like Airtable, Asana List View, Google Sheets, and Notion rely heavily on tables, while major companies like Google, Apple, and Microsoft use TanStack Table in their applications.
TanStack Table's most important features include its headless architecture, its lightweight footprint, and built-in support for filtering, sorting, grouping, aggregation, and column resizing. These are some of the many reasons why TanStack Table is one of the top React table libraries, and we'll explore several of them below.
\\nIn case you are still wondering which React table library to use, you’ll find a detailed comparison with Material UI Table and Material React Table later in the article.
\\nIf you’re upgrading from React Table v7 to TanStack Table v8, follow this migration guide. Below is a summary of the key steps.
\\nStart by uninstalling React Table and installing TanStack Table using the following commands:
\\nnpm uninstall react-table @types/react-table\\nnpm install @tanstack/react-table\\n\\n
Next, rename all instances of useTable
to useReactTable
and update the syntax where needed. Table options like disableSortBy
have been renamed to enableSorting
. Column definitions now use accessorKey
for strings and accessorFn
for functions, while Header
has been renamed to header
.
Markup changes include replacing cell.render(\'Cell\')
with flexRender()
, manually defining props like colSpan
and key
, and using getValue()
instead of value
for cell rendering. Column definitions have been reorganized, and custom filters now return a boolean instead of filtering rows directly. Overall, the migration requires minor adjustments, but the core concept and functionality remain the same.
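To make the markup change concrete, here is a sketch of the same table cell rendered both ways; the v7 fragment uses react-table's old prop-getter API, while the v8 fragment sets props manually:

// React Table v7
<td {...cell.getCellProps()}>{cell.render("Cell")}</td>

// TanStack Table v8
<td key={cell.id}>
  {flexRender(cell.column.columnDef.cell, cell.getContext())}
</td>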
If you are starting from scratch, the installation process is quite straightforward. Install the TanStack Table adapter for React using the following command:
\\nnpm install @tanstack/react-table\\n\\n
The @tanstack/react-table
adapter is a wrapper around the core table logic. It will provide a number of Hooks and types to manage the table state. The package works with React 16.8 and later, including React 19 (though compatibility with the upcoming React Compiler may change in future updates).
Let’s create a basic table using TanStack Table and some dummy data. We first define the data and column structure:
\\nimport * as React from \\"react\\";\\nimport {\\n createColumnHelper,\\n flexRender,\\n getCoreRowModel,\\n useReactTable,\\n} from \\"@tanstack/react-table\\";\\nimport \\"./styles.css\\";\\n\\n// Define the type for our table data\\ntype Person = {\\n name: string;\\n age: number;\\n status: string;\\n};\\n\\n// Sample dataset\\nconst data: Person[] = [\\n { name: \\"Alice\\", age: 25, status: \\"Active\\" },\\n { name: \\"Bob\\", age: 30, status: \\"Inactive\\" },\\n { name: \\"Charlie\\", age: 35, status: \\"Pending\\" },\\n];\\n\\n// Create a column helper to ensure type safety\\nconst columnHelper = createColumnHelper<Person>();\\n\\n// Define columns for the table\\nconst columns = [\\n columnHelper.accessor(\\"name\\", {\\n header: \\"Name\\",\\n cell: (info) => info.getValue(),\\n }),\\n columnHelper.accessor(\\"age\\", {\\n header: \\"Age\\",\\n cell: (info) => info.getValue(),\\n }),\\n columnHelper.accessor(\\"status\\", {\\n header: \\"Status\\",\\n cell: (info) => info.getValue(),\\n }),\\n];\\n\\n
Now, we use useReactTable
to manage the table’s data and structure.
export default function App() {\\n const table = useReactTable({\\n data,\\n columns,\\n getCoreRowModel: getCoreRowModel(),\\n });\\n\\n return (\\n <div>\\n <table>\\n <thead>\\n {table.getHeaderGroups().map((headerGroup) => (\\n <tr key={headerGroup.id}>\\n {headerGroup.headers.map((header) => (\\n <th key={header.id}>\\n {flexRender(header.column.columnDef.header, header.getContext())}\\n </th>\\n ))}\\n </tr>\\n ))}\\n </thead>\\n <tbody>\\n {table.getRowModel().rows.map((row) => (\\n <tr key={row.id}>\\n {row.getVisibleCells().map((cell) => (\\n <td key={cell.id}>\\n {flexRender(cell.column.columnDef.cell, cell.getContext())}\\n </td>\\n ))}\\n </tr>\\n ))}\\n </tbody>\\n </table>\\n </div>\\n );\\n}\\n\\n
You can view the demo here. Here’s the basic rundown of how everything works. You can always read the docs for a detailed explanation of individual Hooks/properties:
\\ncreateColumnHelper
ensures type safety when defining column headers and cell contentuseReactTable
sets up the table structuretable.getHeaderGroups()
generates table headers, while table.getRowModel().rows
dynamically renders table rowsInstead of using static data, we can populate our table dynamically using an API. Here is the TanStack Table example we’ll be working with.
\\nFor our application, we’ll use Axios to retrieve movie information with the search term snow
from the TVMAZE API. Below is the endpoint for this operation:
https://api.tvmaze.com/search/shows?q=snow\\n\\n
To call the API, let’s install Axios:
\\nnpm install axios\\n\\n
Modify App.tsx
to fetch data when the component loads:
import { useEffect, useState } from \\"react\\";\\nimport axios from \\"axios\\";\\nimport \\"./App.css\\";\\n\\nfunction App() {\\n const [data, setData] = useState([]);\\n\\n const fetchData = async () => {\\n try {\\n const response = await axios.get(\\"https://api.tvmaze.com/search/shows?q=snow\\");\\n setData(response.data);\\n } catch (error) {\\n console.error(\\"Error fetching data:\\", error);\\n }\\n };\\n\\n useEffect(() => {\\n fetchData();\\n }, []);\\n\\n return <></>;\\n}\\n\\nexport default App;\\n\\n
Above, we created a state called data
. Once the component gets mounted, we fetch movie content from the TVMAZE API using Axios and save the returned result in the data
state variable.
This API returns an array of TV shows, each containing properties like name
, type
, and language
:
// the API gives an array of TV Shows. Here is one item:\\n// we will later use this response to create a TypeScript object..\\n//that will help us model the Show interface\\n[\\n {\\n \\"score\\": 0.86069167,\\n \\"show\\": {\\n \\"id\\": 10412,\\n \\"url\\": \\"...\\",\\n \\"name\\": \\"Snow\\",\\n \\"type\\": \\"Scripted\\",\\n \\"language\\": \\"English\\",\\n \\"genres\\": [\\n \\"Comedy\\"\\n ],\\n \\"status\\": \\"...\\",\\n \\"runtime\\": 120,\\n \\"averageRuntime\\": 120,\\n \\"premiered\\": \\"...\\",\\n \\"ended\\": \\"...\\",\\n \\"officialSite\\": \\"..\\",\\n \\"schedule\\": {\\n \\"time\\": \\"..\\",\\n \\"days\\": [\\n \\"..\\"\\n ]\\n },\\n \\"rating\\": {\\n \\"average\\": null\\n },\\n \\"weight\\": 40,\\n \\"network\\": {\\n \\"id\\": 26,\\n \\"name\\": \\"..\\",\\n \\"country\\": {\\n \\"name\\": \\"..\\",\\n \\"code\\": \\"..\\",\\n \\"timezone\\": \\"...\\"\\n },\\n \\"officialSite\\": \\"..\\"\\n },\\n \\"webChannel\\": null,\\n \\"dvdCountry\\": null,\\n \\"externals\\": {\\n \\"tvrage\\": null,\\n \\"thetvdb\\": null,\\n \\"imdb\\": null\\n },\\n \\"image\\": {\\n \\"medium\\": \\"...\\"\\n },\\n \\"summary\\": \\"...\\",\\n \\"updated\\": 1670595447,\\n \\"_links\\": {\\n \\"self\\": {\\n \\"href\\": \\"..\\"\\n },\\n \\"previousepisode\\": {\\n \\"href\\": \\"...\\",\\n \\"name\\": \\"...\\"\\n }\\n }\\n }\\n },\\n //other TV shows..\\n]\\n\\n
The data
prop is the data we got through the API call, and columns
will be an array of column definitions that configures our table columns.
In the /src
folder, create a new Table.tsx
file and paste the following code:
// src/Table.tsx\\n\\n//create a Show object for TypeScript(see API response above for reference):\\n//only these properties are relevant to us:\\n\\nexport type Show = {\\n show: {\\n status: string;\\n name: string;\\n type: string;\\n language: string;\\n genres: string[];\\n runtime: number;\\n };\\n};\\n\\n// now create types for props for this Table component(https://tanstack.com/table/latest/docs/framework/react/examples/sub-components)\\ntype TableProps<TData> = {\\n data: TData[];\\n columns: GroupColumnDef<TData>[];\\n};\\n\\nexport default function Table({ columns, data }:TableProps<Show>) {\\n // Table component logic and UI come here\\n return <></>;\\n}\\n\\n
Let’s modify the content in App.tsx
to include the columns for our table and also render the Table
component:
// src/App.tsx\\nimport { useEffect, useMemo, useState } from \\"react\\";\\nimport axios from \\"axios\\";\\nimport { createColumnHelper } from \\"@tanstack/react-table\\";\\nimport Table, { Show } from \\"./Table\\";\\n\\nfunction App() {\\n const [data, setData] = useState<Show[]>();\\n const columnHelper = createColumnHelper<Show>();\\n //define our table headers and data\\n const columns = useMemo(\\n () => [\\n //create a header group:\\n columnHelper.group({\\n id: \\"tv_show\\",\\n header: () => <span>TV Show</span>,\\n //now define all columns within this group\\n columns: [\\n columnHelper.accessor(\\"show.name\\", {\\n header: \\"Name\\",\\n cell: (info) => info.getValue(),\\n }),\\n columnHelper.accessor(\\"show.type\\", {\\n header: \\"Type\\",\\n cell: (info) => info.getValue(),\\n }),\\n ],\\n }),\\n //create another group:\\n columnHelper.group({\\n id: \\"details\\",\\n header: () => <span> Details</span>,\\n columns: [\\n columnHelper.accessor(\\"show.language\\", {\\n header: \\"Language\\",\\n cell: (info) => info.getValue(),\\n }),\\n columnHelper.accessor(\\"show.genres\\", {\\n header: \\"Genres\\",\\n cell: (info) => info.getValue(),\\n }),\\n columnHelper.accessor(\\"show.runtime\\", {\\n header: \\"Runtime\\",\\n cell: (info) => info.getValue(),\\n }),\\n columnHelper.accessor(\\"show.status\\", {\\n header: \\"Status\\",\\n cell: (info) => info.getValue(),\\n }),\\n ],\\n }),\\n ],\\n [],\\n );\\n const fetchData = async () => {\\n const result = await axios(\\"https://api.tvmaze.com/search/shows?q=snow\\");\\n setData(result.data);\\n };\\n useEffect(() => {\\n fetchData();\\n }, []);\\n\\n return <>{data && <Table columns={columns} data={data} />}</>;\\n}\\n\\nexport default App;\\n\\n
In the code above, we used the useMemo
Hook to create a memoized array of columns; we defined two header groups, each with different columns for our table heads.
We’ve set up each column with an accessor that points to the corresponding field in the TVMAZE API response we stored in data.
Now, let’s finish our Table
component:
// src/Table.tsx\\n//extra code removed for brevity..\\nimport {\\n flexRender,\\n getCoreRowModel,\\n GroupColumnDef,\\n useReactTable,\\n} from \\"@tanstack/react-table\\";\\n\\nexport default function Table({ columns, data }: TableProps<Show>) {\\n //use the useReact table Hook to build our table:\\n const table = useReactTable({\\n //pass in our data\\n data,\\n columns,\\n getCoreRowModel: getCoreRowModel(),\\n });\\n // Table component logic and UI come here\\n return (\\n <div>\\n <table>\\n <thead>\\n {/*use the getHeaderGRoup function to render headers:*/}\\n {table.getHeaderGroups().map((headerGroup) => (\\n <tr key={headerGroup.id}>\\n {headerGroup.headers.map((header) => (\\n <th key={header.id} colSpan={header.colSpan}>\\n {header.isPlaceholder\\n ? null\\n : flexRender(\\n header.column.columnDef.header,\\n header.getContext(),\\n )}\\n </th>\\n ))}\\n </tr>\\n ))}\\n </thead>\\n <tbody>\\n {/*Now render the cells*/}\\n {table.getRowModel().rows.map((row) => (\\n <tr key={row.id}>\\n {row.getVisibleCells().map((cell) => (\\n <td key={cell.id}>\\n {flexRender(cell.column.columnDef.cell, cell.getContext())}\\n </td>\\n ))}\\n </tr>\\n ))}\\n </tbody>\\n </table>\\n </div>\\n );\\n}\\n\\n
Above, we passed columns
and data
to useReactTable
. The useReactTable
Hook will return the necessary props for the table, body, and the transformed data to create the header and cells. React will then generate the header by iterating through the headers using the getHeaderGroups
function, and the table body’s rows will be generated via the getRowModel()
function.
You’ll also notice that the genre
field is an array, but React will render it to a comma-separated string in our final output.
If we run our application at this point, we should get the following output:
\\nWhile this table is adequate for most applications, what if we require custom styles? With TanStack Table, you can define custom styles for each cell; it’s possible to define styles in the column
object, as shown below.
For example, let’s make a badge-like custom component to display each genre:
\\n// src/App.tsx\\n//extra code removed for brevity\\ntype GenreProps = {\\n genres: string[];\\n};\\n\\nconst Genres = ({ genres }: GenreProps) => {\\n // Loop through the array and create a badge-like component instead of a comma-separated string\\n return (\\n <>\\n {genres.map((genre, idx) => {\\n return (\\n <span\\n key={idx}\\n style={{\\n backgroundColor: \\"green\\",\\n marginRight: 4,\\n padding: 3,\\n borderRadius: 5,\\n }}\\n >\\n {genre}\\n </span>\\n );\\n })}\\n </>\\n );\\n};\\n\\n//this function will convert runtime(minutes) into hours and minutes\\nfunction convertToHoursAndMinutes(runtime: number) {\\n const hour = Math.floor(runtime / 60);\\n const min = Math.floor(runtime % 60);\\n return `${hour} hour(s) and ${min} minute(s)`;\\n}\\n\\nfunction App() {\\n const columns = useMemo(\\n () => [\\n //...\\n columnHelper.group({\\n //..\\n columns: [\\n ,\\n //...\\n columnHelper.accessor(\\"show.genres\\", {\\n header: \\"Genres\\",\\n //render the Genres component here:\\n cell: (info) => <Genres genres={info.getValue()} />,\\n }),\\n columnHelper.accessor(\\"show.runtime\\", {\\n header: \\"Runtime\\",\\n //use our convertToHoursAndMinutes function to render the runtime of the show\\n cell: (info) => convertToHoursAndMinutes(info.getValue()),\\n }),\\n //...\\n ],\\n }),\\n ],\\n [],\\n );\\n //further code..\\n}\\n\\n//...\\n\\n
We updated the Genres
column above by iterating and sending its values to a custom component, creating a badge-like element. We also changed the runtime
column to show the watch hour and minute based on the time. Following this step, our table UI should look like the following:
As you can see, TanStack Table has successfully styled our table with relative ease! If you need more help rendering custom cells, refer to the documentation.
\\nWe’ve seen how we can customize the styles for each cell based on our needs; you can show any custom element for each cell based on the data value.
\\ngetFilteredRowModel
Using the guide for global filtering, we can extend our table by adding global search capabilities. The getFilteredRowModel
property in the useReactTable
Hook will let TanStack Table know that we want to implement filtering in our project.
First, let’s create a search input in Table.tsx
:
// src/Table.tsx\\n//Create a searchbar:\\nfunction Searchbar({\\n value: initialValue,\\n onChange,\\n ...props\\n}: {\\n value: string | number;\\n onChange: (value: string | number) => void;\\n} & Omit<React.InputHTMLAttributes<HTMLInputElement>, \\"onChange\\">) {\\n const [value, setValue] = useState(initialValue);\\n useEffect(() => {\\n setValue(initialValue);\\n }, [initialValue]);\\n //if the entered value changes, run the onChange handler once again.\\n useEffect(() => {\\n onChange(value);\\n }, [value]);\\n //render the basic searchbar:\\n return (\\n <input\\n {...props}\\n value={value}\\n onChange={(e) => setValue(e.target.value)}\\n />\\n );\\n}\\n\\nexport default function Table({ columns, data }: TableProps<Show>) {\\n const [globalFilter, setGlobalFilter] = useState(\\"\\");\\n const table = useReactTable({\\n data,\\n columns,\\n getCoreRowModel: getCoreRowModel(),\\n filterFns: {},\\n state: {\\n globalFilter, //specify our global filter here\\n },\\n onGlobalFilterChange: setGlobalFilter, //if the filter changes, change the hook value\\n globalFilterFn: \\"includesString\\", //type of filtering\\n getFilteredRowModel: getFilteredRowModel(), //row model to filter the table\\n });\\n return (\\n <div>\\n {/*Render the searchbar:*/}\\n <Searchbar\\n value={globalFilter ?? \\"\\"}\\n onChange={(value) => setGlobalFilter(String(value))}\\n placeholder=\\"Search all columns...\\"\\n />\\n {/*Further code..*/}\\n </div>\\n );\\n}\\n\\n
Let’s break this code down:
\\nSearchbar
component. As the name suggests, this component will send user input to TanStack Table. The library will then filter the table rows to match the user inputgetFilteredRowModel
method and passed our globalFilter
Hook to the state
property in the useReactTable
HookSearchBar
componentThis will be the result:
\\nThanks to TanStack Table, column searching is also available to help the user filter out a specific column.
\\nThe code snippet below introduces column searching functionality in our app:
\\nimport { ColumnFiltersState } from \\"@tanstack/react-table\\";\\n\\nexport default function Table({ columns, data }: TableProps<Show>) {\\n const [columnFilters, setColumnFilters] = useState<ColumnFiltersState>([]);\\n const table = useReactTable({\\n //....\\n state: {\\n columnFilters,\\n globalFilter,\\n },\\n onColumnFiltersChange: setColumnFilters,\\n //...\\n });\\n // Table component logic and UI come here\\n return (\\n <div className=\\"p-2\\">\\n {/*...further code..*/}\\n <table>\\n <thead>\\n {table.getHeaderGroups().map((headerGroup) => (\\n <tr key={headerGroup.id}>\\n {headerGroup.headers.map((header) => (\\n <th key={header.id} colSpan={header.colSpan}>\\n {header.isPlaceholder\\n ? null\\n : flexRender(\\n header.column.columnDef.header,\\n header.getContext(),\\n )}\\n <div>\\n {/*If the column can be filtered, render the Filter component.*/}\\n {header.column.getCanFilter() ? (\\n <div>\\n <Filter column={header.column} />\\n </div>\\n ) : null}\\n </div>\\n </th>\\n ))}\\n </tr>\\n ))}\\n </thead>\\n {/*further code..*/}\\n </table>\\n <div className=\\"h-4\\" />\\n </div>\\n );\\n}\\n//create a Filter component to use for column searching:\\nfunction Filter({ column }: { column: Column<Show, unknown> }) {\\n const columnFilterValue = column.getFilterValue();\\n\\n return (\\n <Searchbar\\n onChange={(value) => {\\n column.setFilterValue(value);\\n }}\\n placeholder={`Search...`}\\n type=\\"text\\"\\n value={(columnFilterValue ?? \\"\\") as string}\\n />\\n );\\n}\\n\\n
As you may notice, the code above is similar to its global counterpart. The only difference is that we’re rendering a Filter
component for the columns that can be filtered.
This will be the result:
\\nThese are very basic examples for filters, and the TanStack Table API provides several options. Be sure to check out the API documentation for more information.
\\ngetSortedRowModel
Let’s implement one more basic functionality for our table: sorting. TanStack Table allows sorting via the getSortedRowModel
method:
// src/Table.tsx\\nimport { SortingState, getSortedRowModel } from \\"@tanstack/react-table\\";\\nconst [sorting, setSorting] = useState<SortingState>([]);\\nconst table = useReactTable({\\n //...\\n getSortedRowModel: getSortedRowModel(),\\n onSortingChange: setSorting,\\n state: {\\n sorting,\\n },\\n //...\\n});\\n\\nreturn (\\n <div>\\n {/*Extra code to render table and logic..(removed for brevity)*/}\\n <thead>\\n {table.getHeaderGroups().map((headerGroup) => (\\n <tr key={headerGroup.id}>\\n {headerGroup.headers.map((header) => {\\n return (\\n <th key={header.id} colSpan={header.colSpan}>\\n {header.isPlaceholder ? null : (\\n <div\\n //when clicked, check if it can be sorted\\n //if it can, then sort this column\\n onClick={header.column.getToggleSortingHandler()}\\n title={\\n header.column.getCanSort()\\n ? header.column.getNextSortingOrder() === \\"asc\\"\\n ? \\"Sort ascending\\"\\n : header.column.getNextSortingOrder() === \\"desc\\"\\n ? \\"Sort descending\\"\\n : \\"Clear sort\\"\\n : undefined\\n }\\n >\\n {flexRender(\\n header.column.columnDef.header,\\n header.getContext(),\\n )}\\n {{\\n //display a relevant icon for sorting order:\\n asc: \\" 🔼\\",\\n desc: \\" 🔽\\",\\n }[header.column.getIsSorted() as string] ?? null}\\n </div>\\n )}\\n </th>\\n );\\n })}\\n </tr>\\n ))}\\n </thead>\\n </div>\\n);\\n\\n
Here, we passed in our getSortedRowModel
and onSortingChange
properties to inform the library that we want to add sorting to our project.
After our sorting implementation, the UI looks like the following:
\\nAs you can see, the user can now click to enable sorting for any column. You can disable the sorting functionality for certain columns via the enableSorting
flag:
//column related data in src/App.tsx:\\n\\ncolumns: [\\n columnHelper.accessor(\\"show.name\\", {\\n header: \\"Name\\",\\n cell: (info) => info.getValue(),\\n enableSorting: false, //disable sorting for this one\\n }),\\n columnHelper.accessor(\\"show.type\\", {\\n header: \\"Type\\",\\n cell: (info) => info.getValue(),\\n }),\\n]\\n//...\\n\\n
getGroupedRowModel
We can even add a grouping feature using the getGroupedRowModel
method. This is great for cases where users want to group columns according to a certain category.
To start, first add the aggregationFn
property to the show.language
, show.name
, and show.type
columns:
//src/App.tsx\\nconst columns = useMemo(\\n () => [\\n columnHelper.group({\\n //...\\n columns: [\\n columnHelper.accessor(\\"show.name\\", {\\n //...\\n aggregationFn: \\"count\\",\\n }),\\n columnHelper.accessor(\\"show.type\\", {\\n //...\\n aggregationFn: \\"count\\",\\n }),\\n ],\\n }),\\n columnHelper.group({\\n //...\\n columns: [\\n columnHelper.accessor(\\"show.language\\", {\\n //...\\n aggregationFn: \\"count\\",\\n }),\\n ],\\n }),\\n ],\\n //extra code removed for brevity..\\n [],\\n);\\n\\n
Here, we’re setting the aggregationFn
property to count
. This tells TanStack Table to use count-based aggregation for those columns.
Next, make these changes to the Table.tsx
component:
//src/Table.tsx\\nimport { GroupingState, getGroupedRowModel, getExpandedRowModel } from \\"@tanstack/react-table\\";\\n\\nconst [grouping, setGrouping] = useState<GroupingState>([]);\\n\\nconst table = useReactTable({\\n //..\\n getExpandedRowModel: getExpandedRowModel(),\\n getGroupedRowModel: getGroupedRowModel(),\\n onGroupingChange: setGrouping,\\n state: {\\n //..\\n grouping,\\n },\\n});\\nreturn (\\n <div>\\n {/*Further code..*/}\\n <thead>\\n {table.getHeaderGroups().map((headerGroup) => (\\n <tr key={headerGroup.id}>\\n {headerGroup.headers.map((header) => {\\n return (\\n <th key={header.id} colSpan={header.colSpan}>\\n {header.isPlaceholder ? null : (\\n <div>\\n {header.column.getCanGroup() ? (\\n // If the header can be grouped, let\'s add a toggle\\n <button\\n {...{\\n onClick: header.column.getToggleGroupingHandler(),\\n style: {\\n cursor: \\"pointer\\",\\n },\\n }}\\n >\\n {header.column.getIsGrouped()\\n ? `(grouped): `\\n : `(ungrouped):`}\\n </button>\\n ) : null}{\\" \\"}\\n {flexRender(\\n header.column.columnDef.header,\\n header.getContext(),\\n )}\\n </div>\\n )}\\n </th>\\n );\\n })}\\n </tr>\\n ))}\\n </thead>\\n {/*Further code..*/}\\n </div>\\n);\\n\\n
Here’s what’s happening in this code block:
\\ngetExpandedRowModel
and getGroupedRowModel
properties to help us use grouping in our project
We render a button
for every column. When clicked, it will trigger the header.column.getToggleGroupingHandler()
function. This will toggle grouping for the selected column
We check whether a column is currently grouped using the header.column.getIsGrouped()
method

This will be the result:
TanStack Table also provides a ColumnSizing
API to help users resize table columns. This is great for situations where users need to adjust column widths to make better use of the available width on the screen.
This code block demonstrates how to implement resizing functionality:
\\n//src/Table.tsx\\nconst [columnResizeMode, setColumnResizeMode] =\\n React.useState<ColumnResizeMode>(\\"onChange\\");\\n\\nconst [columnResizeDirection, setColumnResizeDirection] =\\n React.useState<ColumnResizeDirection>(\\"ltr\\");\\n\\nconst table = useReactTable({\\n data,\\n columns,\\n columnResizeMode, //specify that we\'ll use resizing in this table\\n columnResizeDirection,\\n getCoreRowModel: getCoreRowModel(),\\n debugTable: true,\\n debugHeaders: true,\\n debugColumns: true,\\n});\\nreturn (\\n <div style={{ direction: table.options.columnResizeDirection }}>\\n <table\\n {...{\\n style: {\\n width: table.getCenterTotalSize(),\\n },\\n }}\\n >\\n <thead>\\n {table.getHeaderGroups().map((headerGroup) => (\\n <tr key={headerGroup.id}>\\n {headerGroup.headers.map((header) => (\\n <th\\n {...{\\n key: header.id,\\n colSpan: header.colSpan,\\n style: {\\n width: header.getSize(),\\n },\\n }}\\n >\\n {header.isPlaceholder\\n ? null\\n : flexRender(\\n header.column.columnDef.header,\\n header.getContext(),\\n )}\\n <div\\n {...{\\n onDoubleClick: () => header.column.resetSize(),\\n //when held down, enable resizing functionality.\\n onMouseDown: header.getResizeHandler(),\\n onTouchStart: header.getResizeHandler(),\\n className: `resizer ${\\n table.options.columnResizeDirection\\n } ${header.column.getIsResizing() ? \\"isResizing\\" : \\"\\"}`,\\n style: {\\n //keep on increasing/decreasing the column till resize mode is finished.\\n transform:\\n columnResizeMode === \\"onEnd\\" &&\\n header.column.getIsResizing()\\n ? `translateX(${\\n (table.options.columnResizeDirection === \\"rtl\\"\\n ? -1\\n : 1) *\\n (table.getState().columnSizingInfo.deltaOffset ??\\n 0)\\n }px)`\\n : \\"\\",\\n },\\n }}\\n />\\n </th>\\n ))}\\n </tr>\\n ))}\\n </thead>\\n <tbody>\\n {table.getRowModel().rows.map((row) => (\\n <tr key={row.id}>\\n {row.getVisibleCells().map((cell) => (\\n <td\\n {...{\\n key: cell.id,\\n style: {\\n //set the width of this column\\n width: cell.column.getSize(),\\n },\\n }}\\n >\\n {flexRender(cell.column.columnDef.cell, cell.getContext())}\\n </td>\\n ))}\\n </tr>\\n ))}\\n </tbody>\\n </table>\\n </div>\\n);\\n\\n
The explanation of the code is in the comments.
\\nLet’s test it out! This will be the output:
\\n\\n | TanStack Table | \\nMaterial React Table | \\nMaterial UI Table | \\n
---|---|---|---|
Type | \\nHeadless table library for building tables and data grids — framework agnostic | \\nUI component library built on TanStack Table and using Material UI V6 design principles | \\nA component within the broader Material UI (MUI) React component library | \\n
UI framework | \\nNo built-in UI | \\nUses Material UI V6 with Emotion styling | \\nUses Material UI styling | \\n
Customization | \\nFully customizable, requires manual UI implementation | \\nPre-styled with Material UI but still customizable | \\nLimited customization using MUI styling options | \\n
Framework support | \\nWorks with TS/JS, React, Vue, Solid, Qwik, and Svelte | \\nReact-only | \\nReact-only | \\n
Dependencies | \\nFully independent | \\nRequires Material UI and Emotion as peer dependencies | \\nRequires Material UI | \\n
Advanced features | \\nFiltering, sorting, resizing, pinning, reordering, visibility control, grouping, aggregation | \\nFiltering, sorting, resizing, pinning, reordering, grouping, aggregation | \\nBasic table columns with manual customization | \\n
TanStack Table replaced React Table in July 2022, offering a TypeScript rewrite for better performance and multi-framework support. The react-table
package is deprecated, with React-specific features now in @tanstack/react-table
.
TanStack Table is a headless table library for building tables and data grids for any JavaScript framework, while Material React Table is a component library built on top of TanStack Table v8’s API. So basically, it is a combination of TanStack functionality and Material UI v6 design.
\\nTanStack Table is a headless table library for building tables and data grids for any JavaScript framework, while Material UI Table is a component within the MUI React component library.
\\nYou can use @tanstack/react-table
for a powerful, customizable table or the native HTML <table>
element for simple use cases. For styled tables, you can also look at libraries like Material React Table, Material UI Table, or React Bootstrap.
In this article, we learned how to build a table UI using React and TanStack Table. It’s not difficult to create your own table for basic use cases, but make sure you’re not reinventing the wheel wherever possible.
\\nI hope you enjoyed learning about table UIs. Let me know about your experience with tables in the comments below.
In JavaScript, you've probably worked with objects quite a bit. You've created and modified them, maybe added some methods, and accessed their properties.
\\nBut have you ever wondered how objects can share behaviors, or why specific methods like toString()
are available on every object you create, even though you didn’t define them yourself? That’s where JavaScript prototypes come in.
In this guide, we’ll explore what prototypes are, how the prototype chain works, and how to use this chain to create inheritance between objects. We’ll also look at the modern class syntax introduced in ES6, which serves as a cleaner way to work with prototypes.
\\nA JavaScript prototype is the mechanism that allows one object to inherit properties and methods from another. This is known as prototype-based inheritance and is a fundamental part of how JavaScript works.
\\nWhere it can get a bit confusing is how this all works under the hood — particularly the distinction between an object’s internal [[Prototype]]
and the prototype
property. Let’s break that down.
[[Prototype]] vs prototype property
An object's prototype, often called [[Prototype]]
, is an internal reference that allows one object to access properties and methods defined on another object. This enables behavior to be inherited through what’s known as the prototype chain.
Let’s see how this works with a plain object:
\\nconst book = {\\n title: \'book_one\',\\n genre: \'sci-fi\',\\n author: \'Ibas Majid\',\\n bookDetails: function () {\\n return `Name: ${this.author} | Title: ${this.title} | Genre: ${this.genre}.`;\\n },\\n};\\nconsole.log(book.bookDetails());\\n// Output: \\"Name: Ibas Majid | Title: book_one | Genre: sci-fi.\\"\\n\\n
If you inspect the object in the browser DevTools, you’ll notice it has an internal [[Prototype]]
, which links to additional properties and methods provided by JavaScript’s built-in Object
:
Now, if you type book.
in the browser console, you’ll see not only the properties and methods we defined, but also built-in methods like toString()
, hasOwnProperty()
, and more:
This happens because when an object is created using literal syntax {}
, like the book
object, JavaScript automatically sets its internal [[Prototype]]
to reference Object.prototype
. That’s where these built-in methods come from; they’re inherited through the prototype chain.
While these methods are defined on Object.prototype
(i.e., the prototype
property of the object constructor), the book
object gains access to them via its internal [[Prototype]]
link. If you type Object.prototype.
in the console, you’ll see the same methods available to book
, thanks to this inheritance:
In our book
object, we didn’t explicitly define a toString()
method, yet calling it still returns a value:
console.log(book.toString()); // [object Object]\\n\\n
When a method or property is accessed on an object, JavaScript first checks if it exists on that object directly. Since toString()
isn’t defined on the book
object, it follows the internal [[Prototype]]
reference, which points to Object.prototype
and finds the method there.
If the method isn’t found on the Object.prototype
, JavaScript continues up the prototype chain until it either finds the method or reaches null
, which marks the end of the chain. If the search reaches null
without success, JavaScript throws an error for methods or returns undefined
for properties:
console.log(book.toNotAvailable());\\n// Uncaught TypeError: book.toNotAvailable is not a function\\n\\n
The prototype chain in this case looks like this:
\\nbook → Object.prototype → null\\n\\n
As seen, the Object.prototype
sits at the top of the prototype chain, and every object in JavaScript ultimately inherits from it.
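You can verify both links of that chain in the console, reusing the book object from above:

// book inherits directly from Object.prototype...
console.log(Object.getPrototypeOf(book) === Object.prototype); // true

// ...and Object.prototype marks the end of the chain
console.log(Object.getPrototypeOf(Object.prototype)); // null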
Constructor functions allow us to create reusable templates for generating similar objects. Rather than repeatedly writing the same object structure, we define a function that initializes properties, and JavaScript uses the new
keyword to create a new object instance from it.
In the example below, the Book
function serves as a blueprint for creating multiple book objects:
function Book(title, genre, author) {\\n this.title = title;\\n this.genre = genre;\\n this.author = author;\\n}\\n\\n
For shared behavior, such as a method to display book details, it is more efficient to define the method on the constructor’s prototype. Just like with Object.prototype
, any methods defined on a constructor’s prototype, such as Book.prototype
in this case, are inherited by all instances without each having its own copy:
Book.prototype.bookDetails = function () {\\n return `Name: ${this.author} | Title: ${this.title} | Genre: ${this.genre}.`;\\n};\\n\\n
Now, using the new
keyword, we can create multiple book objects:
const book1 = new Book(\'book_one\', \'sci-fi\', \'Ibas Majid\');\\nconst book2 = new Book(\'book_two\', \'fantasy\', \'Alice M.\');\\n\\n
Both book1
and book2
share the bookDetails
method via Book.prototype
. When you inspect book1
in the console and call book1.bookDetails()
, you’ll see that the method is not a direct property of book1
, but is inherited through the constructor’s prototype:
The prototype chain in this case looks like this:
\\nbook1 → Book.prototype → Object.prototype → null\\n\\n
Just like custom constructor functions, built-in objects also inherit properties and methods through their respective prototypes. For example, arrays inherit from Array.prototype
, and date objects from Date.prototype
. Ultimately, all prototype chains lead back to Object.prototype
, which sits at the top:
dateObj → Date.prototype → Object.prototype → null \\narrayObj → Array.prototype → Object.prototype → null\\n\\n
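A quick console check confirms these chains:

const arrayObj = [1, 2, 3];
const dateObj = new Date();

console.log(Object.getPrototypeOf(arrayObj) === Array.prototype); // true
console.log(Object.getPrototypeOf(dateObj) === Date.prototype); // true

// Both built-in prototypes themselves inherit from Object.prototype
console.log(Object.getPrototypeOf(Array.prototype) === Object.prototype); // true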
JavaScript provides essential methods for interacting with an object’s prototype:
\\nObject.getPrototypeOf(obj)
This method retrieves the prototype of an object. It is useful when you want to inspect or confirm the inheritance structure of an object:
\\nfunction Book(...) {\\n // ...\\n}\\nconst book1 = new Book(...);\\nconsole.log(Object.getPrototypeOf(book1)); // Outputs Book.prototype\\n\\n
In the code, we’ve used Object.getPrototypeOf(book1)
to retrieve the prototype, confirming that book1
inherits from Book.prototype
.
Object.setPrototypeOf(obj, proto)
This method allows you to change the prototype of an existing object. The following code defines customProto
with a describe
method and sets it as the prototype of the book1
:
function Book(...) {\\n // ...\\n}\\nconst book1 = new Book(...);\\nconst customProto = {\\n describe() {\\n return `Title: ${this.title}`;\\n },\\n};\\nObject.setPrototypeOf(book1, customProto);\\nconsole.log(book1.describe()); // Outputs: \'Title: book_one\'\\n\\n
Now, book1
inherits from customProto
and can use the describe
method. This operation overrides its original prototype chain.
Object.create(proto)
This method allows you to create a new object and set its prototype explicitly. The following code creates newBook
, which inherits from book1
:
function Book(...) {\\n // ...\\n}\\nconst book1 = new Book(...);\\n// Create a new object that inherits from the book1\\nconst newBook = Object.create(book1);\\nconsole.log(newBook.author); // Outputs: \'Ibas Majid\'\\n\\n
Accessing newBook.author
pulls the value from book1 via prototype inheritance.
A key strength of prototypes is their ability to create inheritance hierarchies. For instance, suppose we want to reuse features from our Book
constructor in a new constructor called Journal
. Rather than building Journal
from scratch, we can extend Book
so that Journal
inherits its properties and methods.
Since the Book
already includes properties like title
, genre
, and author
, we’ll create a Journal
to inherit these while also adding a new year
property.
The code would look like this:
\\n// Constructor function\\nfunction Book(title, genre, author) {\\n // ...\\n}\\nBook.prototype.bookDetails = function () {\\n // ...\\n};\\nfunction Journal(title, genre, author, year) {\\n Book.call(this, title, genre, author);\\n this.year = year;\\n}\\nconst journal1 = new Journal(\\n \'Journal_one\',\\n \'technology\',\\n \'John Marcus\',\\n \'2020\'\\n);\\n\\n
In this example, the Journal
constructor uses Book.call()
to inherit the title
, genre
, and author
properties from Book
, while introducing its own year
property. This allows the journal1
object to carry over the properties defined in the Book
, making it easy to reuse and extend functionality without duplicating code.
Accessing journal1
in the console returns the expected values. However, if you try to call a method from the parent constructor’s prototype, such as bookDetails()
, it will result in an error:
This happens because while Book.call()
copies the properties, it does not link Journal.prototype to Book.prototype, meaning methods defined on Book.prototype
are not inherited by default.
To ensure that instances of Journal
can access methods defined on the Book.prototype
, we need to establish a connection between their prototypes. Specifically, we link Journal.prototype
to Book.prototype
, so that methods like bookDetails
become available to all Journal
instances through inheritance.
To set up this prototype chain, use Object.setPrototypeOf()
immediately after defining the Journal
constructor function, but before creating any instances:
// Constructor function\\nfunction Journal(title, genre, author, year) {\\n // ...\\n}\\n// Link Journal.prototype to the Book.prototype\\nObject.setPrototypeOf(Journal.prototype, Book.prototype);\\n// Create an instance of Journal\\nconst journal1 = new Journal(...);\\n\\n
Now, journal1
has access to both its properties and any methods available on the Book.prototype
, like bookDetails
.
To incorporate the year
property in the Journal
, we can override the inherited bookDetails()
method with a customized version. This ensures that Journal
instances display complete details while preserving their prototype connection to Book
.
Add the following code before creating any Journal
instances:
// Override bookDetails to include year\\nJournal.prototype.bookDetails = function () {\\n return `${this.title} - ${this.genre} by ${this.author}, published in ${this.year}`;\\n};\\n\\n
Now, when you call bookDetails()
on both the book1
instance and journal1
instance, each will return the appropriate message based on its properties:
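With the example data used above, the output looks like this:

console.log(book1.bookDetails());
// Name: Ibas Majid | Title: book_one | Genre: sci-fi.

console.log(journal1.bookDetails());
// Journal_one - technology by John Marcus, published in 2020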
ES6 introduces a more convenient class
syntax for creating constructor functions and setting up prototype chains. However, under the hood, JavaScript still uses the prototype-based inheritance model we explored above.
Here’s how it works, starting with a simple class definition:
\\nclass Book {\\n constructor(...) {\\n // properties assigned here\\n }\\n // other methods here...\\n}\\n\\n
Using the ES6 class
keyword, we define a blueprint for creating object instances. The class can include a constructor
method for property initialization, along with additional methods that are automatically added to the prototype.
Rewriting our earlier Book
constructor with ES6 syntax:
class Book {\\n constructor(title, genre, author) {\\n this.title = title;\\n this.genre = genre;\\n this.author = author;\\n }\\n bookDetails() {\\n return `Name: ${this.author} | Title: ${this.title} | Genre: ${this.genre}.`;\\n }\\n}\\nconst book1 = new Book(\'book_one\', \'sci-fi\', \'Ibas Majid\');\\n\\n
This syntax is more convenient, as methods are automatically added to the prototype—no manual setup is required. You can verify this in your browser’s DevTools.
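You can also confirm it programmatically:

// Under the hood, the class is still a constructor function...
console.log(typeof Book); // 'function'

// ...and bookDetails was placed on Book.prototype automatically
console.log(Object.getOwnPropertyNames(Book.prototype)); // ['constructor', 'bookDetails']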
\\nTo create a subclass from our existing Book
class, we use the extends
keyword. This tells JavaScript that the new child class should inherit properties and methods from the parent class.
Let’s rewrite our traditional prototype-based Journal
constructor using ES6 class syntax. Add the following code after the Book
class definition:
// Book sub class\\nclass Journal extends Book {\\n constructor(title, genre, author, year) {\\n super(title, genre, author);\\n this.year = year;\\n }\\n}\\n// instantiate Journal\\nconst journal1 = new Journal(\\n \'Journal_one\',\\n \'technology\',\\n \'John Marcus\',\\n \'2020\'\\n);\\n\\n
In this example, Journal
extends Book
, automatically inheriting its properties and methods. Within the Journal
constructor, we use super()
to call the parent’s constructor and initialize the title
, genre
, and author
, followed by defining the year
property specific to Journal
.
With this setup, there is no need to manually establish the prototype chain. ES6 class inheritance takes care of it, ensuring that instances of Journal
have access to methods defined on Book.prototype
. You can confirm this using your browser’s DevTools.
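A few quick checks make the inherited chain visible:

console.log(journal1 instanceof Journal); // true
console.log(journal1 instanceof Book); // true

// The chain: journal1 -> Journal.prototype -> Book.prototype -> Object.prototype -> null
console.log(Object.getPrototypeOf(Journal.prototype) === Book.prototype); // true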
Just like in the prototype-based approach, we can override the bookDetails()
method in the Journal
class to include the year
. Here’s how:
class Journal extends Book {\\n // constructor\\n bookDetails() {\\n return `${this.title} - ${this.genre} by ${this.author}, published in ${this.year}`;\\n }\\n}\\n\\n
Now, calling journal1.bookDetails()
will return a message that includes all the properties, including the year.
Prototypes are a core feature of JavaScript, enabling objects to inherit properties and methods from other objects. By understanding the prototype chain, utilizing constructor functions, and leveraging the power of ES6 classes, you can write more efficient and maintainable code. Whether you’re working with plain objects, traditional constructors, or modern class-based syntax, a solid grasp of prototypes is essential for effective JavaScript programming.
\\nIf you found this guide helpful, feel free to share it online. Questions or thoughts? Drop them in the comments. I’d love to hear from you.
Building JavaScript applications involves anticipating and handling unexpected issues. Errors are inevitable, but managing them effectively ensures a better user experience. JavaScript provides the try…catch block as a structured way to handle errors gracefully.
\\nThis article will explore how to use the try…catch block
, covering its basic syntax and advanced scenarios, such as nested blocks, rethrowing errors, and handling asynchronous code.
try...catch
The try...catch
statement consists of three key parts:
try
block — Contains the code that might throw an errorcatch
block — Handles an error if one occurs. It’s only executed when an error is thrownfinally
block — Runs the cleanup code. It’s executed whether an error is thrown or notThe try
block must be followed by either a catch
or finally
block, or both as shown below:
// try...catch
try {
  console.log("executing try block...");
  console.log(missingVar);
} catch {
  console.log("an error occurred");
}

// OUTPUT:
// executing try block...
// an error occurred

// try...finally
try {
  console.log("executing try block...");
} finally {
  console.log("final statement");
}

// OUTPUT:
// executing try block...
// final statement

// try...catch...finally
try {
  console.log("executing try block...");
  console.log(missingVar);
} catch (errorVar) {
  console.log("an error occurred", errorVar);
} finally {
  console.log("final statement");
}

// OUTPUT:
// executing try block...
// an error occurred
// final statement
The catch
block has an error identifier that can be used to access the thrown error. You can access it as a whole (e.g, errorVar
) or use its properties individually:
errorVar.name
– Specifies the type of errorerrorVar.message
– Provides a human-readable error descriptionThe code snippet below uses destructuring to access the error thrown:
\\ntry {\\n console.log(missingVar)\\n } catch ({name, message}) {\\n console.log(\\"name: \\", name)\\n console.log(\\"message: \\", message)\\n }\\n \\n // OUTPUT:\\n // name: ReferenceError\\n // message: missingVar is not defined\\n\\n
Sometimes, built-in errors like TypeError
don’t fully capture what went wrong. Throwing custom errors allows you to provide clearer error messages and include additional debugging information.
To create a custom error, you extend the Error
class, define a constructor that sets a meaningful error message, and assign a custom name. You can optionally include additional debugging information and capture the original stack trace for debugging during development:
class OperationError extends Error {
  /**
   * Custom error for handling operation failures.
   * @param {string} resource - The resource involved in the error.
   * @param {string} action - The action that failed.
   */
  constructor(resource, action) {
    // Construct a meaningful error message
    super(`Failed to ${action} ${resource}. Please check the resource and try again.`);
    // Preserve the original stack trace (optional, useful for debugging)
    if (Error.captureStackTrace) {
      Error.captureStackTrace(this, OperationError);
    }
    this.name = "OperationError";
    // Custom debugging information
    this.resource = resource;
    this.action = action;
  }
}
In the code snippet below, the custom error is thrown in the try
block to simulate a function call that may encounter this specific type of error. The error object includes the stack trace and additional error properties:
try {
  // simulate an operation that may throw an exception
  throw new OperationError("file", "read");
} catch (error) {
  console.error(`${error.name}: ${error.message}`);

  console.log(`additional info: resource was a ${error.resource} and action was ${error.action}`);

  console.log(error);
}

// OUTPUT:
// OperationError: Failed to read file. Please check the resource and try again.
//
// additional info: resource was a file and action was read
//
// OperationError: Failed to read file. Please check the resource and try again.
//     at Object.<anonymous> (/Users/walobwa/Desktop/project/test.js:25:11)
//     at Module._compile (node:internal/modules/cjs/loader:1376:14)
//     at Module._extensions..js (node:internal/modules/cjs/loader:1435:10)
//     at Module.load (node:internal/modules/cjs/loader:1207:32)
//     at Module._load (node:internal/modules/cjs/loader:1023:12)
//     at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12)
//     at node:internal/main/run_main_module:28:49 {
//   resource: 'file',
//   action: 'read'
// }
catch
blocksConditional catch
blocks use the if...else
statement to handle specific errors while allowing unexpected ones to propagate.
Knowing the different types of errors that can be thrown when executing code helps handle them appropriately. Using instanceof
, we can catch specific errors like OperationError
and provide a meaningful message for the error:
try {\\n // simulate an operation that may throw an exception\\n throw new OperationError(\\"file\\", \\"read\\"); \\n } catch (error) {\\n if (error instanceof OperationError) {\\n // handle expected error\\n console.error(\\"Operation Error encountered:\\", error.message);\\n } else {\\n // log unexpected error\\n console.error(\\"Unexpected error encountered:\\", error.message);\\n }\\n }\\n // OUTPUT:\\n // Operation Error encountered: Failed to read file. Please check the resource \\n // and try again.\\n\\n
In the code snippet above, we log any other error in the else
statement. A good practice would be to rethrow errors not explicitly handled in the try...catch
block.
Rethrowing errors ensures that they are propagated up the call stack for handling. This prevents silent failures and maintains the stack trace.
\\nIn the code snippet below, we catch the expected error, OperationError
, silence it, and then defer the handling of other errors by rethrowing. The top-level function will now handle the rethrown error:
try {\\n throw new TypeError(\\"X is not a function\\"); \\n } catch (error) {\\n if (error instanceof OperationError) {\\n console.error(\\"Operation Error encountered:\\", error.message);\\n } else {\\n throw error; // re-throw the error unchanged\\n }\\n }\\n\\n
try…catch
blockA nested try...catch
block is used when an operation inside a try
block requires separate error handling. It helps manage multiple independent failures, ensuring one failure does not disrupt the entire execution flow.
Errors in the inner block are caught and handled locally while the outer block manages unhandled or propagated errors. If the error thrown is handled in the inner try..catch
block, the outer catch block is not executed:
try {\\n try {\\n throw new OperationError(\\"file\\", \\"read\\");\\n } catch (e) {\\n if (e instanceof OperationError) {\\n console.error(\\"Operation Error encountered:\\", e.message);\\n } else {\\n throw e; // re-throw the error unchanged\\n }\\n } finally {\\n console.log(\\"finally inner block\\");\\n }\\n } catch (err) {\\n console.error(\\"outer error log\\", err.message);\\n }\\n // OUTPUT:\\n // Operation Error encountered: Failed to read file. Please check the resource and // try again.\\n // finally inner block\\n\\n
If an error is not handled or is rethrown in the inner block, the outer try...catch
block catches it. The nested finally
block executes before the outer catch
or finally
block, ensuring cleanup at each level:
try {\\n try {\\n throw new TypeError(\\"file\\");\\n } catch (e) {\\n if(e instanceof OperationError) {\\n console.error(\\"Operation Error encountered:\\", e.message);\\n } else {\\n throw e; // re-throw the error unchanged\\n }\\n } finally {\\n console.log(\\"finally inner block\\");\\n }\\n } catch (err) {\\n console.error(\\"outer error log\\", err.message);\\n }\\n // OUTPUT:\\n // finally inner block\\n // outer error log file\\n\\n
try...catch
works with synchronous code. When an error occurs inside an asynchronous function, the try...catch
block completes execution before the error occurs, leaving it unhandled.
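Here is a minimal sketch of the problem; the callback fires long after the surrounding block has finished:

try {
  setTimeout(() => {
    // Thrown later, after the try...catch below has already completed
    throw new Error("async failure");
  }, 100);
} catch (error) {
  // Never reached; the error escapes as an uncaught exception
  console.error("caught:", error.message);
}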
Asynchronous operations require proper error handling to prevent unhandled rejections and unexpected failures. Using try...catch
with async/await
helps prevent unhandled rejections from slipping through.
async/await
ensures that the try…catch
block waits for the result of the asynchronous operation before proceeding:
async function openFile(url) {\\n try {\\n const response = await fetch(url);\\n if (!response.ok) {\\n throw new OperationError(\\"file\\", \\"open\\"); // Reusing OperationError for handling file open errors\\n }\\n return await response.json();\\n } catch (error) {\\n console.error(`File fetch failed: ${error.message}`);\\n // Rethrow or handle gracefully\\n throw error; // Propagate the error upward\\n }\\n }\\n\\n
In the example above, the openFile
function is asynchronous. The result of the fetch
operation is awaited. If an error is thrown, it is logged and propagated to the outer try...catch
block where it’s handled:
try {\\n const data = await openFile(\\"data.json\\");\\n console.log(data);\\n } catch (error) {\\n console.error(`Failed to open file: ${error.message}`);\\n }\\n \\n\\n
finally
for cleanupThe finally
block in a try...catch
statement is used to execute code that must run regardless of whether an error occurs. This is useful for cleanup operations such as closing files, releasing resources, or resetting states:
try {
  // operation that opens a file and throws an operation error
  throw new OperationError("file", "read");
} catch (error) {
  if (error instanceof OperationError) {
    console.error(`Operation error: ${error.message}`);
  } else {
    throw error;
  }
} finally {
  closeFile(file); // Ensures the file is closed even if an error occurs (closeFile and file are illustrative)
}
This tutorial explored error handling in JavaScript using the try...catch
block. We covered its basic syntax, throwing custom errors, rethrowing errors, and using nested blocks. We also discussed handling asynchronous errors with try...catch
and async/await
, as well as using the finally
block for code cleanup.
By effectively using try...catch
, developers can build more robust applications, prevent unexpected crashes, and improve debugging, ensuring a better user experience.
Here’s a quick summary of the differences between a .ts
and .tsx
file extension:
| Feature | .ts (TypeScript) | .tsx (TypeScript JSX) |
| --- | --- | --- |
| Purpose | Standard TypeScript code files | TypeScript files that include JSX syntax |
| Use cases | General TypeScript code (logic, data, utilities) | React components, any code needing JSX rendering |
| File content | Pure TypeScript code (classes, interfaces, types) | TypeScript code with embedded JSX elements |
| Compilation | Compiled to .js files | Compiled to .jsx files after JSX transformation |
| React usage | Typically used for non-component code in React projects | Essential for React component files |
| Syntax | Standard TypeScript syntax only | TypeScript syntax + JSX syntax |
| Type checking | Type checks TypeScript code | Type checks TypeScript code and JSX elements |
TL;DR: If your file is a React component, then use the .tsx
extension. Otherwise, use the .ts
file type.
In 2012, Microsoft released TypeScript, a programming language that empowers developers to write more robust and maintainable code. Since its introduction, TypeScript’s adoption within the React ecosystem has grown substantially.
\\nHowever, as newer developers enter the TypeScript and React scene, common questions arise regarding type definitions, component interactions, and, notably, the distinctions between .ts
and .tsx
file extensions.
In this article, we‘ll cover the differences between React’s .ts
and .tsx
file extensions in depth. Furthermore, this article will also offer practical code samples to demonstrate where each file type is appropriate.
Here’s what we’ll discuss today:
- TypeScript (.ts) files
- TypeScript JSX (.tsx) files
- An answer to the common question from newer developers: what is the difference between .ts and .tsx?
.ts
) file?As the name suggests, files with the .ts
extension contain solely TypeScript-related code. These files can contain types, interfaces, classes, and more.
Here’s a little sample of what a .ts
file should look like:
// Function to say hello to a person
export function sayHello(name: string): string {
  return "Hello " + name;
}
// invoke the function
sayHello("John");
// since this is TypeScript, we should get an error here.
// this is because the function accepts a string, but we're passing an integer
console.log(sayHello(9));
Here’s a brief explanation of the code:
\\nsayHello
, which accepts a string as a parametersayHello
twice; the first time with a string as a parameter and the second time with an integer.Thanks to TypeScript, we expect the compiler to report a type error:
Now that we know the basics of TypeScript files, let’s learn about them in depth!
\\nEnums allow developers to define a set of constants. A great use case for an enum is defining status codes for a certain job. For example: PENDING
, COMPLETED
, or RUNNING
:
enum Status {\\n PENDING = \\"pending\\",\\n COMPLETED = \\"completed\\",\\n RUNNING = \\"running\\",\\n}\\nlet currentStatus: Status = Status.PENDING;\\nconsole.log(\\"current status of job: \\", currentStatus);\\ncurrentStatus = Status.COMPLETED;\\nconsole.log(\\"new status of job: \\", currentStatus);\\n\\n
We first declared an enum called Status
with three variables: PENDING
, COMPLETED
, and RUNNING
. Later, we initialized a variable called currentStatus
and logged out its value to the console.
Let’s test it out! The program should output pending
and completed
to the terminal:
Classes are another TypeScript concept that enables developers to follow the object-oriented programming (OOP) paradigm. They are ideal for situations where you need to contain and organize business logic in one module.
\\nHere’s a code sample of a class in the TypeScript language:
\\nclass User {\\n name: string;\\n email: string;\\n constructor(name: string) {\\n this.name = name;\\n this.email = `${name.toLowerCase()}@example.com`;\\n }\\n greet(): void {\\n console.log(`Hello, my name is ${this.name}! My email is ${this.email}`);\\n }\\n}\\nconst user1 = new User(\\"Alice\\");\\nuser1.greet();\\n\\n
First, we defined a class called User
with two properties: name
and email
. Additionally, it would also have a method called greet
, which would log out the user’s email and name.
We then initialized an instance of the User
and called it user1
.
Finally, we invoked the greet
function on user1
.
Because of the greet
method, we expect the program to output Alice’s
name and email address:
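The console output, for reference:

Hello, my name is Alice! My email is alice@example.com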
React’s powerful built-in Hooks enable developers to tap into core React features. As a result, this simplifies component logic and enhances reusability.
\\nHowever, in some cases, developers can also build custom Hooks for specific purposes. A great example is building a custom Hook to request data from a server or to track browser storage.
\\nThe code sample below builds a custom React Hook in TypeScript:
\\nimport { useEffect, useState } from \\"react\\";\\n\\nfunction useLocalStorage(key: string, initialValue: string = \\"\\") {\\n//declare two states to store and get local storage\\n const [storedValue, setStoredValue] = useState<string>();\\n const [itemExists, setItemExists] = useState<boolean | undefined>();\\n //on first mount, check if the item with the \'key\' actually exists\\n useEffect(() => {\\n //try to get the value from local storage\\n const item = window.localStorage.getItem(key);\\n //if it exists, set the Hook value to the item\\n if (item) {\\n setStoredValue(JSON.parse(item));\\n setItemExists(true);\\n } else {\\n //otherwise, set the itemExists boolean value to false\\n setItemExists(false);\\n setStoredValue(initialValue);\\n }\\n }, []);\\n//if invoked, manipulate the key in localStorage \\n const setValue = (value: string) => {\\n try {\\n setStoredValue(value);\\n window.localStorage.setItem(key, JSON.stringify(value));\\n } catch (error) {\\n console.error(error);\\n }\\n };\\n //delete item from local storage\\n const deleteItem = () => {\\n window.localStorage.removeItem(key);\\n };\\n return [storedValue, setValue, itemExists, deleteItem] as const;\\n}\\nexport default useLocalStorage;\\n\\n
The code is explained in the comments. Later on in this article, we will use the useLocalStorage
custom Hook in another code example.
Another use case for the .ts
file extension is to write code to handle and retrieve data from an API:
import axios from \\"axios\\";\\n\\nexport type CoffeeType = {\\n title: string;\\n description: string;\\n ingredients: string[];\\n};\\n\\nexport const getCoffees = async () => {\\n try {\\n const response = await axios.get(\\"https://api.sampleapis.com/coffee/hot\\");\\n return response.data as CoffeeType[];\\n } catch (error) {\\n console.error(error);\\n }\\n};\\n\\n
First, we defined a CoffeeType
interface. This will serve as the schema for our API response. Additionally, we’re also exporting it so that we can use it in a React component later on in this article.
We then declared the getCoffees
method, which will execute a GET
request to SampleAPI.
Afterward, we returned the API response and told TypeScript that the response would be an array of CoffeeType
objects.
Thanks to TypeScript support, our code editor will automatically detect the return type of the getCoffees
function:
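For instance, at a hypothetical call site, the editor infers coffees as CoffeeType[] | undefined (undefined because the catch branch returns nothing):

// Hypothetical usage (inside an async function); coffees is inferred as CoffeeType[] | undefined
const coffees = await getCoffees();
coffees?.forEach((coffee) => console.log(coffee.title));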
As discussed earlier, TypeScript allows developers to declare schemas and types in their code.
\\nIn some cases, one might want to reuse certain type declarations across multiple components.
\\nHere’s one code sample that does the job:
\\n//declare a CoffeeType schema:\\nexport type CoffeeType = {\\n title: string; //each object will have a \'title \' variable of type \'string\',\\n description: string;//a description of type string,\\n ingredients: string[]; //and an array of strings\\n};\\n//declare another interface:\\nexport interface User {\\n name: string;\\n email: string;\\n id: number;\\n}\\n//valid type:\\nconst latte: CoffeeType = {\\n title: \\"Latte\\",\\n description: \\"A coffee drink made with espresso and steamed milk.\\",\\n ingredients: [\\"espresso\\", \\"steamed milk\\"],\\n};\\n//invalid type: (we\'re missing \'description\' and \'ingredients\' )\\nconst invalidCoffeeLatte: CoffeeType = {\\n title: \\"Latte\\",\\n};\\n\\n
Since invalidCoffeeLatte
is missing certain properties from the CoffeeType
schema, TypeScript will display an error:
Now that we’ve learned about the
.ts
file extension, let’s move on to the .tsx
file type.
.tsx
) files?Unlike the .ts
extension, files that end with .tsx
are for code that contains JSX instructions. In other words, React components live in this file.
The code below demonstrates a minimal TSX React component with a button and a useState
Hook:
import { useState } from \\"react\\";\\n\\nfunction App() {\\n //initialize count state with 0\\n const [count, setCount] = useState(0);\\n //since count is an integer, TypeScript won\'t allow you to set it to a string value\\n return (\\n <>\\n {/* Button to increment count */}\\n <button onClick={() => setCount(count + 1)}>Increment</button>\\n <p>Count value: {count}</p>\\n </>\\n );\\n}\\nexport default App;\\n\\n
The program should render a button and increment the count
variable when pressed:
As you learned above, TSX files are for writing React components. TypeScript allows developers to enforce type safety to component props like so:
\\nimport React from \\"react\\";\\n\\ninterface InterfaceExampleProps {\\n title: string;\\n description: string;\\n}\\nconst InterfaceExample: React.FC<InterfaceExampleProps> = ({\\n title,\\n description,\\n}) => {\\n return (\\n <div>\\n <h1>Title value: {title}</h1>\\n <p>Description value: {description}</p>\\n </div>\\n );\\n};\\nexport default InterfaceExample;\\n\\n
In the code snippet above, we’re defining an interface that tells React that the InterfaceExample
component will require two props of type string
:
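Here is a short, hypothetical usage sketch; omitting either prop fails to compile:

const Demo = () => (
  <div>
    <InterfaceExample title="Hello" description="A typed component" />
    {/* <InterfaceExample title="Hello" /> */}
    {/* ^ Error: Property 'description' is missing in type '{ title: string; }' */}
  </div>
);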
Earlier in the article, we declared a custom Hook called useLocalStorage
. In this section, you’ll learn how to use the Hook in a React component.
To use the useLocalStorage
function, write the following code:
import { useEffect } from "react";
import useLocalStorage from "./useLocalStorage"; // assuming the Hook lives in useLocalStorage.ts

function App() {
  const [nameStorage, setNameStorage, exists, deleteItem] =
    useLocalStorage("name");

  useEffect(() => {
    if (exists === false) {
      console.log("does not exist");
      setNameStorage("Google");
    }
  }, [exists]);

  return (
    <>
      {nameStorage}
      <button onClick={() => setNameStorage("LogRocket")}>
        Change item to LogRocket
      </button>
      <button onClick={() => deleteItem()}>Delete item from storage</button>
    </>
  );
}

export default App;
First, we used array destructuring to get the nameStorage
, setNameStorage
, exists
, and deleteItem
objects from the useLocalStorage
Hook.
Next, via the useEffect
function, we checked if an item with the name
key exists in local storage. If it doesn’t, React will set the name
item to Google
.
Then, we rendered two buttons to manipulate and delete browser storage.
\\n\\nThis will be the output of the code:
As you can see, React successfully sets and deletes the app’s storage with a button click.
\\nNow that you’ve learned the main differences between the .ts
and .tsx
file types, you’ll learn about common best practices to enhance the React development experience.
Linters are tools that help developers spot stylistic errors. The use of linters can boost code quality and readability.
\\nTools like Vite and Next.js already provide support for linting via the ESLint module.
\\nIf you’re using Vite, this is how you can run linting on your project:
\\nnpm run lint\\n\\n
If your code has issues, ESLint will automatically detect them and report them in the terminal:
TypeScript supports utilities to let developers modify or compose types without rewriting them.
\\nFor example, the Partial
utility can mark all the properties in a type as optional:
//create a type for the User object:\\ntype User = { name?: string; email: string; id: number };\\ntype PartialUser = Partial<User>;\\n\\n
As a result, we expect all properties in PartialUser
to be optional:
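For example, an object with only some of the fields now type-checks:

// PartialUser is equivalent to { name?: string; email?: string; id?: number }
const draft: PartialUser = { name: "Alice" }; // valid, since email and id are now optional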
Additionally, the Required
utility will set all properties of our type as required:
type RequiredUser = Required<User>;\\n\\n
As expected, VSCode will set all properties of User
as mandatory:
Similarly, the Omit
utility can remove certain properties from a type:
type OmittedUser = Omit<User, \\"email\\">; //remove the \'email\' property of User\\n\\n
This is how the OmittedUser
type appears:
Generics provide type safety by ensuring that the type of data is consistent throughout your code. They also improve code reusability by allowing you to write functions and components that can work with different types, without having to write separate implementations for each type.
\\nHere’s a simple code sample that demonstrates generics in action:
\\nfunction outputMessage<T>(message: T): T {\\n return message;\\n}\\nconsole.log(\\"Using string generic: \\", outputMessage(\\"Hello, world!\\"));\\nconsole.log(\\"Using integer generic: \\", outputMessage(42));\\n\\n
In the snippet above, we defined a function called outputMessage
that uses generics. Later on, we invoked this method with a string
and a number
type:
TypeScript also lets developers intersect types via the &
operation. The code sample below extends the Computer
type to create a Laptop
type:
//create base type.\\ntype Computer = {\\n memory: number;\\n processor: string;\\n};\\n//Laptop extends Computer and adds an additional property\\ntype Laptop = Computer & {\\n batterySize: string;\\n};\\n//create an object of type Laptop\\nconst framework13: Laptop = {\\n memory: 16,\\n processor: \\"Ryzen 7\\",\\n batterySize: \\"61Wh\\",\\n};\\nconsole.log(\\"specs of the Framework Laptop are \\\\n\\", framework13);\\n\\n
In large production-ready React apps, programmers need to organize their components to prevent a chaotic file structure. Let’s now explore a recommended project structure for React projects.
\\nWhen it comes to components or pages(.tsx
files), it would be sensible to put them under the components
or pages
directory:
Helper functions, custom Hooks, and other services should go to the utils
, hooks
, and services
folder:
Furthermore, if you want to reuse certain types across your project, it’s better to place them in the types
directory:
Here’s a little summary of use cases for the .ts
and .tsx
file extensions:
| Use case | File extension |
| --- | --- |
| React components | .tsx |
| Integration with third-party JSX libraries | .tsx |
| Business logic | .ts |
| Type definitions and interfaces | .ts |
| Hooks | .ts |
| Helper functions | .ts |
In this article, you learned the differences between the .ts
and .tsx
file types, their use cases, and best practices for a React TypeScript project. In my personal and work projects, I’ve been using TypeScript to help me write robust and more stable React code, and so far, it’s been an absolute joy to use.
Thank you so much for reading! Happy coding!
Pair programming is a collaborative software development approach where two programmers work together on the same task. One programmer, the “driver,” writes the code, while the other, the “navigator,” reviews the work and provides guidance.
\\nIn practice, it’s best for pairs to switch roles periodically. Since this process can be mentally demanding, shorter, time-boxed sessions often yield better results than long, exhaustive ones.
\\nPair programming offers numerous benefits, including improved problem-solving, enhanced knowledge sharing, and cleaner, more refined code. Collaborating with a partner often leads to better outcomes than working alone.
\\nThis post explores pair programming at a high level and shares insights from real-world experiences with implementing it in professional environments.
\\nCollaboration is essential in modern software development, especially with today’s increasingly distributed teams. Maintaining alignment can be challenging with team members often spread across different cities, states, and even countries. Pair programming serves as a powerful tool to bridge these gaps, fostering stronger collaboration and more cohesive teamwork.
\\nWhen conducting pair programming sessions, developers get the chance to comfortably share knowledge and accomplish tasks in a way that allows work to be done and general learning to occur.
\\nOne caveat to pair programming that I’ve seen is that it always requires a “safe space” for people to work in. When you pair program with team members, the environment needs to be comfortable enough that everyone can feel comfortable speaking and being heard.
\\nIf the environment is even a little bit uncomfortable, let alone hostile, the work will not be done right. All this touches more on team dynamics, which are beyond the scope of this post. In short, a good working environment is essential for success.
\\nI’ve seen several variations of pair programming in my different professional roles throughout my career. More often than not I’ve also seen “mob programming” or “mobbing,” where an entire team works together on a task.
\\nMost of the teams I’ve been on have consisted of one to five people. In that context, mobbing can be managed. I imagine if there was a much larger team, this could get unruly.
\\nMy current team has seen great success with a mixed approach between pair programming and mobbing on tasks.
\\n\\nIn my professional role, I’m a tech lead of a product team that owns an application used nationally by a large company. The product is high visibility and a critical function to the business.
\\nI joined the team after the product matured over the course of a few years and different leadership contingents. Given the importance of the product, the team had become siloed and focused on tasks rather than working together.
\\nThere were several knowledge gaps on the team, due to different amounts of tenure with the product. Given these factors, I started trying to host pair programming sessions — which were really “mobbing,” as I mentioned in the earlier section.
\\nOur team started with each session structured as a time for one of the developers to share what they are working on or get feedback on some specific topic. This was helpful initially because we could fill in gaps in our knowledge, where we previously only had one person who knew about a topic.
\\nFrom these early sessions, we then attempted to “mob” on tasks. We picked cloud-based topics like building infrastructure, or refining configuration for a resource. We even did a session where we evaluated our disaster recovery strategy, which turned out to be a wealth of knowledge for us to grow as a team.
\\nAll of these early sessions started to build the safe space that’s so essential for effective pair programming. This helped individuals gain more skills in areas outside of their expertise and generally made tasks more fun to work on.
\\nOver time, team members also started to have smaller pair programming sessions for specific tasks. As a team, we continue to have the larger group “mobbing” sessions.
\\nThe big takeaway from this work was that focused collaboration made our product and our team better. It’s easy for teams to become siloed and just focus on “keeping the lights on.” However, in that mindset teams never move on to something greater.
\\nIt’s important for teams to do their work, of course. But it’s also essential for them to be aware and receive chances to grow outside of the specific thing that they’re asked to do.
\\nPair programming really helped our team to become more well-refined, and in turn, our product (and future enhancements) have become more robust and made better use of trends and best practices in the industry.
\\nIn my career, I’ve had multiple experiences with pair programming. Here are some of the lessons I’ve learned:
\\nIf you Google “pair programming,” you often see tactics like implementing time constraints and other general rules for productivity. What I have found with smaller endeavors is to cater the sessions to what works best for the developers involved.
\\nIt helps to:
\\nI’ve gone with the stereotypical experience where two developers sit side by side. I’ve also had the opportunity to use a variety of tools to pair programs virtually.
\\nWith the team I lead professionally, I’ve found that simplicity is the best option for pair programming.
\\nThere are a lot of powerful collaboration tools that you can use. However, just talking and doing a video call works in most cases. We use Microsoft Teams, so we’ll generally have a Teams call and then whoever is the “driver” shares their screen. In most cases, this works fine — as most of the time, the people in the “observer” role can articulate the change to be done.
\\nIt’s also helpful to have some form of visualization software available. Most of the time I use draw.io because it is free. I’ve seen teams use many other applications like Miro and other platform-specific tools built into things like Microsoft Teams.
\\nWhen working virtually, you often need a “whiteboard” area to describe concepts. If you’re working in an office, this can obviously be done with a real-life whiteboard. But in virtual collaboration, you’ll need a tool to accomplish the same task.
\\nHere is an example flow chart from an application. This is pretty similar to the format that you would see if my team needed to diagram out a process flow.
\\nTime management is also key to my team’s success with pair programming. Task management software can aid in what you do during your pair programming sessions.
\\nAzure DevOps is more than just a task management tool, it also works effectively for my team. We use the “Boards” component of Azure DevOps to track agile stories that our team works with.
\\nWe usually pick a story from our board and pair on that. Azure DevOps also includes a wiki feature, which is super helpful for maintaining team documentation. If we’re unable to complete a task during a session, someone will document what was done either in the story on our board or in our team’s wiki.
\\nAn example of what a board looks like would include stories in the different swim lanes between things like “in process” vs. “released”:
\\nWe use notes in our stories to pick up where we left off. If the task is something that someone can “take off and run with” afterward, that also works. The team will just hand it off to an individual to finish up. Taking my example board screenshot, here is what a story with some comments indicating progress would look like:
\\nIt‘s also important to have someone who can facilitate the sessions. If you’re working one-on-one, this is easy; it’s just you and your coworker.
\\nIf you’re in a larger group, it helps to have someone who can focus side conversations, and at the very least bring everyone back to the task at hand. The focus is an important part of success with these sessions.
\\nThis doesn’t have to be overly formal. You just need someone who can always bring conversations back to focus. I usually hold this role with my own team. However, different team members can do this at any time.
\\nIt helps to regularly “change hands” during pair programming sessions, i.e. swapping team members between being the “driver” and the “observer.” If one person is always the one doing the actual code changes, it can be exhausting — and they also don’t get the opportunity to analyze what is being done.
\\nMaking sure that everyone feels heard is really important in these sessions. The idea of a “safe space” is crucial to success. It’s vital not to speak over individuals, or make people feel like their ideas are not valued. Everyone needs to feel empowered to discuss and contribute to the sessions.
\\nPair programming sessions can tend to use up a lot of energy. You have a lot to track between the task being done, the conversations about the work, and also being aware of your progress.
\\nIt’s important to be aware of the energy of your team. If you feel like the other person (or team in a “mobbing” session) seems tired, then find a good place to take a break. When you get exhausted you generally do not make good decisions, and it’s really just not fun for anyone.
\\nIn this post, I discussed pair programming and what has worked for my team professionally. I briefly introduced “mobbing” and also talked about the team dynamics that need to take place for success. There are many different process ideas and considerations you need to make for each session to be successful.
\\nEvery team is different, and all of these things will cater to the best environment that works for each team. My team has had a great experience with pair programming and has found that it generally helps not only our productivity but also makes working fun.
\\nI highly recommend trying it out, and seeing if pair programming works for you. Thanks for reading my post!
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nWhen users interact with an app or website, they expect an immediate response — and in this context, “immediate” means as quick as half a second or even less. Delays can break a user’s flow, leading to frustration, disengagement, and abandonment. But what if interactions felt instant, even when they weren’t? This is where the Doherty Threshold comes into play.
\\nThe Doherty Threshold states that system response time should be 400 milliseconds or less to maintain a user’s flow of thought and engagement. This article explores what the Doherty Threshold is, why it matters in UX design, and how you can apply it to create seamless, high-performing digital experiences.
\\nThe Doherty Threshold suggests that when feedback occurs within this timeframe, users feel more in control and remain engaged. When the delay exceeds this limit, users become frustrated, distracted, or disengaged.
\\nThis principle was first introduced by Walter J. Doherty and Ahrvind J. Thadani in 1982. Their research into the relationship between computer response time and user productivity in those early days of computing revealed that faster system responses resulted in users staying engaged, working more efficiently, and perceiving UX in a more positive light.
\\nDoherty emphasized this principal’s importance by stating,
\\n\\n“Productivity soars when a computer and its users interact at a pace that ensures the user is not kept waiting.”
This insight came from a deep understanding of human psychology — our ability to stay focused and engaged depends on minimal interruption.
\\nThadani further reinforced this by highlighting,
\\n\\n“The human attention span operates within these tiny windows of time. Beyond 400 milliseconds, delays are perceptible, and users lose focus, disrupting their workflow.”
So, if you adhere to this threshold, you can ensure users remain in a state of engagement and efficiency, significantly improving their overall experience.
\\nUsers expect smooth, near-instantaneous interactions when they’re on web and mobile applications. The perception of speed is just as important as actual speed, so you need to find ways to create the illusion of instant responsiveness (like skeleton screens or loading animations), even if the system processing takes longer than 400ms.
\\nAlso, page abandonment rates increase as response time slows. Research from SOASTA (The State of Online Retail Performance) shows that one-second delay in mobile load times can impact conversion rates by up to 20 percent. That’s a significant impact in just a second, showing why it’s important to make websites and apps fast and responsive to keep users from leaving.
\\nThe Doherty Threshold helps UX designers and developers:
\\nWhile designing for the Doherty Threshold can significantly enhance user engagement, several challenges must be considered:
\\nBy acknowledging these limitations, you can create strategies to mitigate delays while still prioritizing user engagement.
\\n\\nTo help you implement the Doherty Threshold, try using the following three strategies:
\\nWhile optimizing actual performance is crucial, you also need to work with developers to reduce the perception of delay through microinteractions, animations, and skeleton screens.
\\nTo do this:
\\nSpeed optimization should be a top priority. While developers handle backend performance, you play a crucial role in ensuring the UI is efficient and lightweight.
\\nMake sure that you:
\\nUsers expect an immediate response to their actions, even if the system needs more time to process them. This means designing interactive feedback mechanisms to keep users informed.
\\nYou can do this by incorporating:
\\nNow, to help you better understand the Doherty Threshold in action, this section outlines real-world examples of its use by successful companies.
\\nGoogle’s search engine starts retrieving potential results as soon as users begin typing, ensuring that once they hit “Enter,” the results appear almost instantaneously. This preemptive approach maintains the perception of a fast and efficient system:
TikTok preloads the next video in the background, ensuring near-instant playback as users scroll. This seamless experience prevents frustration and keeps users immersed:
\\nFace ID’s smooth animations and instant visual feedback make authentication feel immediate, even while complex processing happens in the background. This reduces perceived wait time and keeps users engaged:
\\nThe Doherty Threshold in UX reminds you that quick, seamless feedback keeps users engaged and in control. To harness this principle effectively, consider these points:
\\nBy thoughtfully applying these strategies, you’ll not only meet user expectations for speed but also you foster an engaging experience.
Docker’s exec
command lets you run shell commands directly inside your running containers without forcing you to restart them. It’s useful when you want to debug an error, adjust settings, or quickly peek into a container’s environment. This makes your development workflow smoother, saving you from unnecessary downtime.
Docker containers have changed how we deploy apps by making it possible to deploy across multiple architectures with a one-fits-all approach. With Docker, you can deploy anything, including CLI tools, across different environments without needing to worry about any differences in the underlying infrastructure.
\\nWhen you run containers, you might need to interact with them to troubleshoot, run additional commands, or modify configurations in real-time. In this article, you’ll learn how to interact with running Docker containers with the exec
command.
docker exec
Before exploring the command, you’ll need a running container, so let’s get you started.
\\nFirst, if you don’t already have Docker installed, you should download and set it up; you’ll need it to execute the commands.
Once you’re all set up, execute this command to run the Ubuntu container via the official image:
\\ndocker run -d --name my_ubuntu_container ubuntu sleep infinity\\n\\n
What’s going on here?
\\n-d
— Runs the container in the background (detached mode)--name my_ubuntu_container
— Assigns the container a name for referencesubuntu
— Specifies the Ubuntu image (Docker will pull it if it’s unavailable locally)sleep infinity
— Keeps the container running indefinitely:\\n
You can verify the container is running with the
ps
command like this:
docker ps\\n\\n
The ps
command lists all running containers. If the container runs, it should appear in the list with your assigned name:
Now that you’re set up with Docker and have a running container, you can interact with it using the exec
command.
exec
commandDocker ships with the exec
command, which allows you to execute commands in running containers without restarting them. The exec
command is useful when you want to start an interactive shell, inspect logs, modify files, or perform various other tasks.
The general syntax for the docker exec
command is:
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]\\n\\n
Here are the options you can set with the exec
command:
| Option | Function | Description |
| --- | --- | --- |
| -d, --detach | Run in background | Runs the command in detached mode |
| --detach-keys string | Override detach keys | Sets a custom key sequence for detaching a container |
| -e, --env list | Set environment variables | Passes environment variables to the command |
| --env-file list | Load env file | Loads environment variables from a specified file |
| -i, --interactive | Keep input open | Keeps standard input open for interaction |
| --privileged | Grant extended privileges | Provides additional permissions for running commands |
| -t, --tty | Allocate pseudo-TTY | Provides an interactive shell interface |
| -u, --user string | Run as specific user | Executes the command as a specified user |
| -w, --workdir string | Set working directory | Sets the working directory inside the container |
You’ll follow up the specified option with the command and arguments you’re executing in the container. Let’s execute a command to list all the files in the container’s directory:
\\ndocker exec my_ubuntu_container ls -l\\n\\n
The command should output the files and directories in long format like this:
\\nIn this case, I’ve used the container ID to reference the container; it works like using the container name. You can add the -it
flag to run an interactive shell inside the container:
docker exec -it my_ubuntu_container /bin/bash
This command opens a Bash shell running in the container. Now, you can run commands directly without the exec
command:
You can exit the interactive mode with the exit
command, which is built into the shell.
\\nBy default, docker exec
executes commands as the root user inside the container. You can specify a different user with the -u
option.
First, you need to create a new user in the container’s OS. Execute this command to create a new user named paul
:
docker exec my_ubuntu_container useradd -m paul\\n\\n
Now, you can specify the user before the commands to execute commands for a user:
\\ndocker exec -u paul my_ubuntu_container whoami\\n\\n
This command should output the username of the user like this:
\\nYou can pass environment variables and read existing environment variables with the -e
option:
docker exec -e MY_VAR=Hello my_ubuntu_container printenv MY_VAR\\n\\n
Executing this command should output “Hello” since it’s the value for the MY_VAR
environment variable. You’ll likely need to run commands in detached mode (in the background). That’s where the -d
option is handy:
docker exec -d my_ubuntu_container touch /tmp/detached_file\\n\\n
This command creates an empty file inside the /tmp
directory without keeping an interactive session open. You can also set a working directory with the -w
option like this:
docker exec -w /var/log my_ubuntu_container ls\\n\\n
The command sets the working directory to /var/log
first before listing all the files in the working directory. If you’re stuck in development, you can always use the --help
flag to browse through all the options further:
exec
errorsYou may encounter some errors when using Docker’s exec
command to interact with running containers. Let’s overview some of them and how you can fix them.
You’ll need to make sure you’re referencing an existing container. Run docker ps
to view all running containers, copy the reference and execute the command with the right reference.
You may experience this when you try executing a command as a non-root user without the necessary privileges. You can either run the command as root, or use the --privileged
flag for extra permissions.
In this case, the container probably isn’t running. Execute docker start <container name>
to start the container if this is the case.
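For the container used in this tutorial, that would be:

docker start my_ubuntu_container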
You can always visit the Docker Community Forum if you have different errors or none of these fixes work for you.
\\ndocker exec
or docker
attach
?Docker’s exec
and attach
commands are handy for interacting with running containers but they’re built for different purposes.
Docker’s attach
command helps with attaching local standard input, output, and error streams to running containers:
| Feature | docker exec | docker attach |
| --- | --- | --- |
| Starts a new process inside the container | ✅ | ❌ |
| Attaches to an existing running process | ❌ | ✅ |
| Supports multiple sessions at the same time | ✅ | ❌ |
| Can run interactive or non-interactive commands | ✅ | ❌ |
You’ll use docker exec
when you need to run a separate command inside a running container and docker attach
if you need to interact with the primary process running inside the container.
In this article, you’ve learned how to interact with your running containers using docker
exec
. The docker exec
command allows you to execute commands inside a running container, whether interactively, with environment variables, or as a different user, without restarting the container. You also learned how to troubleshoot common errors and the difference between exec
and docker attach
, which connects to an already running process.
The array filter()
method does exactly what its name suggests; it filters an array based on a given condition, returning a new array that contains only the elements that meet the specified condition.
In this article, we will learn how the array filter()
method works, practical use cases, advanced techniques, and best practices to help you write efficient filtering logic.
filter()
methodThe array filter()
method is defined using this format:
const newArray = originalArray.filter(callbackFunction, thisArg);
// callbackFunction is invoked as callbackFunction(element, index, array)
The method takes in the following parameters and arguments:
- callbackFunction — Defines the rule for filtering. It is invoked once for each element in the array:
  - element — The current element being processed
  - index (optional) — The index of the current element. It starts counting from 0
  - array (optional) — The original array to be filtered
- The callbackFunction returns a Boolean value. If true, the element is included in the new array
- thisArg (optional) — A value to use as this inside the callbackFunction. This parameter is ignored if the callback is an arrow function, as arrow functions don’t have this binding
- The filter() method returns a new array containing only the elements that satisfy the filtering condition

Note: If no element passes the condition, it will return an empty array.
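For example, with some sample data:

const numbers = [1, 2, 3, 4, 5, 6]; // sample data for illustration

// Keep only the even values
const evens = numbers.filter((value) => value % 2 === 0);
console.log(evens); // [2, 4, 6]

// No element passes this condition, so the result is an empty array
const none = numbers.filter((value) => value > 100);
console.log(none); // []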
How the filter() method works

To understand the step-by-step process of how the array filter() method works, let’s look at an example scenario without the filter method vs. with the filter method.
Assume that we have an array of words and only want to get the words longer than five characters from the array. Without filter()
, we would traditionally use a for
loop like this:
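const words = ['apple', 'banana', 'kiwi', 'strawberry', 'fig', 'mango']; // sample data for illustration
const longWords = [];

for (let i = 0; i < words.length; i++) {
  // Manually check each word and collect the matches ourselves
  if (words[i].length > 5) {
    longWords.push(words[i]);
  }
}

console.log(longWords); // ['banana', 'strawberry']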
With the for
loop, we had to take an extra step by manually iterating through the array and adding each matching value individually.
Now, let’s simplify this process using the filter()
method:
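// Same sample words array as above
const longWords = words.filter((word) => word.length > 5);

console.log(longWords); // ['banana', 'strawberry']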
The filter()
method itself is an iterative method, so it does not require the additional step of using a for
loop, and based on the result from the callback function, each element from the array is automatically added to a new array.
It’s important to note that this method is designed specifically for arrays. To use it with other JavaScript data types, they must first be converted into an array.
Use cases for the array filter() method

This method can be used to solve logic problems in real-world applications. Here are some of the most common use cases:
Let’s consider an example where we want to filter a list of product prices to find those above a certain threshold.
\\nTo solve this effectively, we can use an arrow function to define the filtering condition concisely inside the filter method:
\\nconst productPrices = [15.99, 25.50, 10.00, 30.75, 5.49, 22.00];\\nconst expensiveProducts = productPrices.filter(price => price > 20);\\n\\nconsole.log(expensiveProducts)\\n// Result: [25.5, 30.75, 22]\\n\\n
One of the most common use cases for the filter()
method is implementing a search feature. It can be used to filter a list of names based on a search query, returning only the items that match.
To ensure case-insensitive matching, it’s best to convert both the search query and the array elements to lowercase before filtering:
\\nconst names = [\'John\', \'Alice\', \'Jonathan\', \'Bob\', \'Joanna\'];\\nconst searchQuery = \'jo\';\\nconst searchResults = names.filter(name => \\n name.toLowerCase().includes(searchQuery.toLowerCase())\\n);\\n\\nconsole.log(searchResults)\\n// Result: [\'John\', \'Jonathan\', \'Joanna\']\\n\\n
The filter()
method can be used on arrays of objects, allowing you to filter them based on specific property values.
Let’s consider a content platform where each post is represented as an object with properties like id
, title
, and tags
. If you want to display only the posts that are tagged as \\"tech\\"
, you can use the filter()
method to extract only those posts:
const posts = [\\n { id: 1, title: \'AI Breakthrough\', tags: [\'tech\', \'science\']},\\n { id: 2, title: \'Travel Guide\', tags: [\'lifestyle\'] },\\n { id: 3, title: \'New JavaScript Framework\', tags: [\'tech\'] },\\n];\\nconst techPosts = posts.filter(post => post.tags.includes(\'tech\'))\\n\\nconsole.log(techPosts)\\n// Result: [\\n// { id: 1, title: \'AI Breakthrough\', tags: [ \'tech\', \'science\' ] },\\n// { id: 3, title: \'New JavaScript Framework\', tags: [ \'tech\' ] }\\n// ]\\n\\n
When working with raw data, it’s common to encounter invalid values or edge cases that must be removed before further processing. Instead of manually checking and cleaning the data, we can use the filter()
method to do this efficiently:
const rawData = [\\"Hello\\", \\"\\", 42, null, undefined, \\"JavaScript\\"];\\nconst cleanData = rawData.filter(item => item !== null && item !== undefined && item !== \'\');\\n\\nconsole.log(cleanData);\\n//Result: [ \'Hello\', 42, \'JavaScript\' ]\\n\\n
A simpler alternative to filter out unwanted values is using Boolean
as the callback function. This removes all falsy values, including null
, undefined
, false
, \\"\\"
(empty strings), and NaN
(Not a Number):
const cleanData = rawData.filter(Boolean);\\n\\n
When working with arrays, duplicates can often exist. Whether it’s from user input or an external source, we need a way to remove them efficiently. This can be done by combining the filter()
method with the indexOf()
method:
const duplicates = [1, 2, 2, 3, 4, 4, 5];\\nconst uniqueValues = duplicates.filter((item, index, arr) => \\n arr.indexOf(item) === index\\n//Compares the first occurrence index with the current index\\n);\\n\\nconsole.log(uniqueValues)\\n// Result: [1, 2, 3, 4, 5]\\n\\n
indexOf(item)
finds the first index of the element in the array. By comparing the current index to this first index, we ensure only the first occurrence of each item is kept in the array.
This method works well for arrays of primitive values (numbers, strings). It doesn’t handle deduplication for objects, as each object reference is unique.
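If you do need to deduplicate objects, one common approach is to compare positions by a unique key instead. Here is a sketch using findIndex() with a hypothetical id property:

const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
  { id: 1, name: 'Ada' }, // duplicate id
];

const uniqueUsers = users.filter(
  // Keep an object only if it is the first occurrence of its id
  (user, index, arr) => arr.findIndex((u) => u.id === user.id) === index
);

console.log(uniqueUsers); // [{ id: 1, name: 'Ada' }, { id: 2, name: 'Grace' }]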
Using filter() with other array methods

Now that we have a solid understanding of how the filter() method works, we can take it a step further by combining it with other array methods like map() and reduce() using chaining.
Chaining in JavaScript is a technique that allows multiple method calls to be linked together, with each call passing its output as the input to the next. This works because many JavaScript methods return an object, enabling consecutive method calls in a seamless flow.
\\nBy chaining array methods, we can efficiently perform complex transformations while keeping the code concise and readable.
\\nBefore diving into chaining, let’s quickly review how these methods work individually:
\\nfilter()
– Narrows down the array by selecting only elements that meet a specific conditionmap()
– Transforms each element in an array and returns a new array with modified valuesreduce()
– Aggregates array values into a single result (such as sum, average, count)Let’s explore how these methods can be combined for advanced use cases.
filter() with map()
A common use case for chaining filter()
and map()
is refining API response data. Often, the responses contain irregularities, such as empty values or invalid entries. By combining these methods, we can filter out invalid or unwanted data and transform the remaining data into a structured format.
In a content management system (CMS), it is common to retrieve a list of articles via an API that may include entries with missing titles or those still in draft status. To ensure that only published articles with valid titles are displayed to users, we can chain these methods:
\\nconst dummyData = [\\n { id: 1, title: \' JavaScript Guide \', status: \'DRAFT\' },\\n { id: 2, title: \'React Basics\', status: \'PUBLISHED\' },\\n { id: 3, title: \' \', status: \'PUBLISHED\' }, // Invalid title\\n];\\n\\nconst publishedArticles = dummyData\\n .filter(item => item.title.trim() !== \'\' && item.status === \'PUBLISHED\') \\n// Remove empty titles and drafts\\n .map(item => ({\\n ...item, // Copy all values from the original array\\n title: item.title.trim().toLowerCase(), // Normalize title\\n }));\\n\\nconsole.log(publishedArticles);\\n// Result: [{ id: 2, title: \'react basics\', status: \'PUBLISHED\' }]\\n\\n
filter() with reduce()
These array methods can be useful when we need to filter specific elements and then accumulate the result.
\\nFor example, consider an ecommerce platform where we have a list of products, but not all of them are currently in stock.
\\nWe want to calculate the total value of all available products by filtering out out-of-stock items and adding up the prices of the remaining products:
\\nconst products = [\\n { name: \'Laptop\', price: 1000, inStock: true },\\n { name: \'Phone\', price: 500, inStock: false },\\n { name: \'Tablet\', price: 750, inStock: true },\\n]\\n\\nconst totalStockValue = products\\n .filter((product) => product.inStock) // Keep only in-stock products\\n .reduce((sum, product) => sum + product.price, 0) // Add up prices\\n\\nconsole.log(totalStockValue)\\n// Result: 1750\\n\\n
While the filter()
method is incredibly simple to use, following the few best practices listed below can help ensure your code is both efficient and easy to maintain.
Keep the logic inside your callback simple by avoiding heavy computations or complex calculations. Precompute values when possible, and use built-in functions for better performance. This keeps your code concise and readable, and reduces boilerplate.
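For example, a lookup structure can be computed once outside the callback instead of being rebuilt on every iteration (the data here is purely illustrative):

const allowedIds = new Set([1, 3, 5]); // computed once, outside the callback
const items = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }];

// Set#has is O(1), so the callback stays cheap and simple
const allowedItems = items.filter((item) => allowedIds.has(item.id));

console.log(allowedItems); // [{ id: 1 }, { id: 3 }, { id: 5 }]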
It’s good to chain methods like filter(), map(), and reduce(), since doing so avoids intermediate variables. However, deeply nested chains can become hard to read. Consider splitting them into well-named helper functions so your code stays easy to follow.
When transforming objects, be cautious not to mistakenly mutate your data. Use the spread operator ...
to create new objects, ensuring your original data remains unchanged.
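As a quick sketch of the safe pattern (sample data assumed):

const products = [{ name: 'Laptop', price: 1000 }];

const discounted = products
  .filter((product) => product.price > 500)
  .map((product) => ({
    ...product, // copy the object instead of mutating it
    price: product.price * 0.9,
  }));

console.log(discounted[0].price); // 900
console.log(products[0].price); // 1000, the original data is unchanged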
For very large datasets, for-loops can sometimes outperform filter()
. Although filter()
is concise and readable, in performance-sensitive contexts, an optimized loop might be the better choice.
In this article, we’ve discussed the array filter()
method, from its basic syntax and use cases to more advanced techniques like chaining with map()
and reduce()
.
By understanding and applying the best practices outlined here, you can write cleaner, more efficient, and more maintainable code.
\\nI hope this tutorial was useful to you! If you have any questions, feel free to reach out to me on X. Happy coding!
The proper handling of JavaScript closures is essential to any JavaScript project.
\\nIn React projects specifically, closures can manifest themselves in ways that are not always readily apparent.
\\nIn this article, I will explain what closures are and provide examples of how to manage them. We’ll also cover a real-life example that I handled with my professional job and the production application we support.
\\nI’ll be referencing my sample project on GitHub throughout the article.
\\nWhat are JavaScript closures?
\\nA JavaScript closure is the relationship between a JavaScript function and references to its surrounding state. In JavaScript, state values have “scope” — which defines how accessible a value is. The more general concept of reference access is also called “lexical scope.” There are three main levels of scope in JavaScript:
\\n{
and }
Here is an example of scope in code:
// Global Scope
let globalValue = "available anywhere";

// Function Scope
function yourFunction() {
  // var1 and var2 are only accessible in this function
  let var1 = "hello";
  let var2 = "world";

  console.log(var1);
  console.log(var2);
}

// Block Scope
if (globalValue === "available anywhere") {
  // variables defined here are only accessible inside this conditional
  let b1 = "block 1";
  let b2 = "block 2";
}
\\nIn the example code above:
- globalValue — Can be reached anywhere in the program
- var1 and var2 — Can only be reached inside yourFunction
- b1 and b2 — Can only be accessed inside the if block where they are defined

Closures happen when you make variables available inside or outside of their normal scope. This can be seen in the following example:
function start() {
  // variable created inside function
  const firstName = "John";

  // function inside the start function which has access to firstName
  function displayFirstName() {
    // displayFirstName creates a closure
    console.log(firstName);
  }
  // should print "John" to the console
  displayFirstName();
}
start();
\\nIn JavaScript projects, closures can cause issues where some values are accessible and others are not. When working with React specifically, this often happens when handling events or local state within components.
\\nIf you’d like a more in-depth review of closures in general, I recommend checking out our article on JavaScript closures, higher-order functions, and currying.
\\nClosures in React
\\nReact projects usually encounter closure issues with managing state. In React applications, you can manage state local to a component with useState
. You can also leverage tools for centralized state management like Redux, or React Context for state management that goes across multiple components in a project.
Controlling the state of a component or multiple components requires the understanding of what values are accessible and where. When managing state in a React project, you may encounter frustrating closure issues where inconsistent changes can occur.
\\n\\nTo better explain the concepts of closures in React, I’ll show an example using the built-in setTimeout
function. After that example in the following section, I will cover a real world production issue I had to resolve with closures. In all of these examples, you can follow along with my sample project.
Consider an application that takes in an input and does an async action. Usually you would see this with a form, or something that would take in client inputs and then pass them over to an API to do something. We can simplify this with a setTimeout
in a component like the following:
const SetTimeoutIssue = () => {\\n const [count, setCount] = useState(0);\\n const handleClick = () => {\\n setCount(count + 1);\\n // This will always show the value of count at the time the timeout was set\\n setTimeout(() => {\\n console.log(\'Current count (Issue):\', count);\\n alert(`Current count (Issue): ${count}`);\\n }, 2000);\\n };\\n return (\\n <div className=\\"p-4 bg-black rounded shadow\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">setTimeout Issue</h2>\\n <p className=\\"mb-4\\">Current count: {count}</p>\\n <button\\n onClick={handleClick}\\n className=\\"bg-blue-500 text-white px-4 py-2 rounded hover:bg-blue-600\\"\\n >\\n Increment and Check After 2s\\n </button>\\n <div className=\\"mt-4 p-4 bg-gray-100 rounded\\">\\n <p className=\\"text-black\\">\\n Expected: Alert shows the updated count\\n </p>\\n <p className=\\"text-black\\">\\n Actual: Alert shows the count from when setTimeout was\\n called\\n </p>\\n </div>\\n </div>\\n );\\n};\\n
\\nThis looks like something that should not have issues. The user clicks a button and a counter value is incremented and then shown in an alert modal. Where the issue happens is:
\\n const handleClick = () => {\\n setCount(count + 1);\\n // This will always show the value of count at the time the timeout was set\\n setTimeout(() => {\\n console.log(\'Current count (Issue):\', count);\\n alert(`Current count (Issue): ${count}`);\\n }, 2000);\\n };\\n
\\nThe count
value is captured by the setTimeout
function call in a closure. If you took this example and attempted to click the button multiple times in rapid succession, you would see something like this:
In that screenshot, the Current Count: 1 indicates that the count
value is actually “1.” Since the setTimeout
created a closure and locked the value to the initial 0, the modal shows 0.
To resolve this issue, we can use the useRef
Hook to create a reference that always has the latest value across re-renders. With React state management, issues can occur where a re-render pulls data from a previous state.
If you just use useState
Hooks without a lot of complexity, you generally can get away with the standard getting and setting state. However, closures in particular data can have issues persisting as updates occur. Consider a refactor of our original component like the following:
const SetTimeoutSolution = () => {\\n const [count, setCount] = useState(0);\\n const countRef = useRef(count);\\n // Keep the ref in sync with the state\\n countRef.current = count;\\n const handleClickWithRef = () => {\\n setCount(count + 1);\\n // Using ref to get the latest value\\n setTimeout(() => {\\n console.log(\'Current count (Solution with Ref):\', countRef.current);\\n alert(`Current count (Solution with Ref): ${countRef.current}`);\\n }, 2000);\\n };\\n return (\\n <div className=\\"p-4 bg-black rounded shadow\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">setTimeout Solution</h2>\\n <p className=\\"mb-4\\">Current count: {count}</p>\\n <div className=\\"space-y-4\\">\\n <div>\\n <button\\n onClick={handleClickWithRef}\\n className=\\"bg-green-500 text-black px-4 py-2 rounded hover:bg-green-600\\"\\n >\\n Increment and Check After 2s\\n </button>\\n <div className=\\"mt-4 p-4 bg-gray-100 rounded\\">\\n <p className=\\"text-black\\">\\n Expected: Alert shows the updated count\\n </p>\\n <p className=\\"text-black\\">\\n Actual: Alert shows the updated count\\n </p>\\n </div>\\n </div>\\n </div>\\n </div>\\n );\\n};\\n
\\nThe difference in the code from the original issue is:
\\n const [count, setCount] = useState(0);\\n const countRef = useRef(count);\\n // Keep the ref in sync with the state\\n countRef.current = count;\\n\\n const handleClickWithRef = () => {\\n setCount(count + 1);\\n // Using ref to get the latest value\\n setTimeout(() => {\\n console.log(\'Current count (Solution with Ref):\', countRef.current);\\n alert(`Current count (Solution with Ref): ${countRef.current}`);\\n }, 2000);\\n };\\n
\\nYou’ll notice that we are using the countRef
value, which references the actual state value for count
. The reference persists across re-renders and thus resolves this closure issue. If you’d like more information on useRef, I recommend reviewing LogRocket’s guide to React refs.
A real-world example of JavaScript closures: SignalR reference leaks in callbacks
\\nIn my professional role, I am a tech lead of a product team that manages an application used nationally by my company. This application handles real-time updates of data that reside in different queues. These queues are shown visually on a page with multiple tabs (one tab per queue). The page will receive messages from Azure’s SignalR service when the data is changed by backend processes. The messages received indicate how to either update the data or move it to a different queue.
\\nMy team encountered an issue where this whole process was generating multiple errors. Basically, some updates seemed to be occurring correctly, while others were missed or incorrect. This was very frustrating for our users. It was also very difficult to debug as the SignalR service operates in real time, and requires triggering messages to be sent from the server to the client.
\\nInitially, I thought that this had to be something on our backend. I walked through the backend processes that generate the SignalR messages with the devs on my team. When it became apparent that the messages were being sent correctly, I switched over to looking at the frontend project.
\\nIn a deep dive of the code, I found that the issue was basically a closure problem. We were using the SignalR client package from Microsoft, and the event handler that was receiving the messages was incorrectly acting on old state.
\\n\\nFor the solution to my problem, I refactored the message handler and also used the useRef
hook that I had mentioned before. If you’re following along on my sample project, I’m referring to the SignalRIssue
and SignalRSolution
components.
Consider the original SignalRIssue component:
\\nimport React, { useState, useEffect } from \'react\';\\nimport { ValueLocation, MoveMessage } from \'../types/message\';\\nimport { createMockHub, createInitialValues } from \'../utils/mockHub\';\\nimport ValueList from \'./ValueList\';\\nimport MessageDisplay from \'./MessageDisplay\';\\n\\nconst SignalRIssue: React.FC = () => {\\n const [tabAValues, setTabAValues] = useState<ValueLocation[]>(() =>\\n createInitialValues()\\n );\\n const [tabBValues, setTabBValues] = useState<ValueLocation[]>([]);\\n const [activeTab, setActiveTab] = useState<\'A\' | \'B\'>(\'A\');\\n const [lastMove, setLastMove] = useState<MoveMessage | null>(null);\\n useEffect(() => {\\n const hub = createMockHub();\\n hub.on(\'message\', (data: MoveMessage) => {\\n // The closure captures these initial arrays and will always reference\\n // their initial values throughout the component\'s lifecycle\\n if (data.targetTab === \'A\') {\\n // Remove from B (but using stale B state)\\n setTabBValues(tabBValues.filter((v) => v.value !== data.value));\\n // Add to A (but using stale A state)\\n setTabAValues([\\n ...tabAValues,\\n {\\n tab: \'A\',\\n value: data.value,\\n },\\n ]);\\n } else {\\n // Remove from A (but using stale A state)\\n setTabAValues(tabAValues.filter((v) => v.value !== data.value));\\n // Add to B (but using stale B state)\\n setTabBValues([\\n ...tabBValues,\\n {\\n tab: \'B\',\\n value: data.value,\\n },\\n ]);\\n }\\n setLastMove(data);\\n });\\n hub.start();\\n return () => {\\n hub.stop();\\n };\\n }, []); // Empty dependency array creates the closure issue\\n\\n return (\\n <div className=\\"p-4 bg-black rounded shadow\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">SignalR Issue</h2>\\n <div className=\\"min-h-screen w-full flex items-center justify-center py-8\\">\\n <div className=\\"max-w-2xl w-full mx-4\\">\\n <div className=\\"bg-gray-800 rounded-lg shadow-xl overflow-hidden\\">\\n <MessageDisplay message={lastMove} />\\n <div className=\\"border-b border-gray-700\\">\\n <div className=\\"flex\\">\\n <button\\n onClick={() => setActiveTab(\'A\')}\\n className={`px-6 py-3 text-sm font-medium flex-1 ${\\n activeTab === \'A\'\\n ? \'border-b-2 border-purple-500 text-purple-400 bg-purple-900/20\'\\n : \'text-gray-400 hover:text-purple-300 hover:bg-purple-900/10\'\\n }`}\\n >\\n Tab A ({tabAValues.length})\\n </button>\\n <button\\n onClick={() => setActiveTab(\'B\')}\\n className={`px-6 py-3 text-sm font-medium flex-1 ${\\n activeTab === \'B\'\\n ? \'border-b-2 border-emerald-500 text-emerald-400 bg-emerald-900/20\'\\n : \'text-gray-400 hover:text-emerald-300 hover:bg-emerald-900/10\'\\n }`}\\n >\\n Tab B ({tabBValues.length})\\n </button>\\n </div>\\n </div>\\n {activeTab === \'A\' ? (\\n <ValueList values={tabAValues} tab={activeTab} />\\n ) : (\\n <ValueList values={tabBValues} tab={activeTab} />\\n )}\\n </div>\\n <div className=\\"mt-4 p-4 bg-yellow-900 rounded-lg border border-yellow-700\\">\\n <h3 className=\\"text-sm font-medium text-yellow-300\\">\\n Issue Explained\\n </h3>\\n <p className=\\"mt-2 text-sm text-yellow-200\\">\\n This component demonstrates the closure issue where\\n the event handler captures the initial state values\\n and doesn\'t see updates. Watch as values may\\n duplicate or disappear due to stale state\\n references.\\n </p>\\n </div>\\n </div>\\n </div>\\n </div>\\n );\\n};\\nexport default SignalRIssue;\\n
\\nThe component basically loads, connects to a hub (here I’ve created a mock version of the SignalR connection) and then acts when messages are received. In my mocked SignalR client, I have it using setInterval
and randomly moving values from one tab to another:
import { MoveMessage, ValueLocation } from \'../types/message\';\\nexport const createInitialValues = (): ValueLocation[] => {\\n return Array.from({ length: 5 }, (_, index) => ({\\n value: index + 1,\\n tab: \'A\',\\n }));\\n};\\nexport const createMockHub = () => {\\n return {\\n on: (eventName: string, callback: (data: MoveMessage) => void) => {\\n // Simulate value movements every 2 seconds\\n const interval = setInterval(() => {\\n // Randomly select a value (1-5) and a target tab\\n const value = Math.floor(Math.random() * 5) + 1;\\n const targetTab = Math.random() > 0.5 ? \'A\' : \'B\';\\n callback({\\n type: \'move\',\\n value,\\n targetTab,\\n timestamp: Date.now(),\\n });\\n }, 2000);\\n return () => clearInterval(interval);\\n },\\n start: () => Promise.resolve(),\\n stop: () => Promise.resolve(),\\n };\\n};\\n
\\nIf you ran my sample component, you would see odd behavior like this:
\\nThere should only be one occurrence of Value1
and Value5
in that list. Instead, there are multiple, and it looks like nothing is being moved over to Tab B.
Looking at the code, you can see the closure issue here:
\\n hub.on(\'message\', (data: MoveMessage) => {\\n // The closure captures these initial arrays and will always reference\\n // their initial values throughout the component\'s lifecycle\\n if (data.targetTab === \'A\') {\\n // Remove from B (but using stale B state)\\n setTabBValues(tabBValues.filter((v) => v.value !== data.value));\\n // Add to A (but using stale A state)\\n setTabAValues([\\n ...tabAValues,\\n {\\n tab: \'A\',\\n value: data.value,\\n },\\n ]);\\n } else {\\n // Remove from A (but using stale A state)\\n setTabAValues(tabAValues.filter((v) => v.value !== data.value));\\n // Add to B (but using stale B state)\\n setTabBValues([\\n ...tabBValues,\\n {\\n tab: \'B\',\\n value: data.value,\\n },\\n ]);\\n }\\n
\\nThe message handler is operating directly on the stale state when updating values. When the handler receives the messages, it’s operating on a point in the state change that is older vs. the actual value that should persist across re-renders.
\\nTo resolve this situation, you can do what I did in the setTimeout
example and go back to the useRef
Hook:
const [tabAValues, setTabAValues] = useState<ValueLocation[]>(() =>\\n createInitialValues()\\n );\\n const [tabBValues, setTabBValues] = useState<ValueLocation[]>([]);\\n const [activeTab, setActiveTab] = useState<\'A\' | \'B\'>(\'A\');\\n const [lastMove, setLastMove] = useState<MoveMessage | null>(null);\\n\\n // Create refs to maintain latest state values\\n const tabAValuesRef = useRef(tabAValues);\\n const tabBValuesRef = useRef(tabBValues);\\n\\n // Keep refs in sync with current state\\n tabAValuesRef.current = tabAValues;\\n tabBValuesRef.current = tabBValues;\\n
\\nThen in the message handler, you look for values from the reference vs. a stale read of the components state by looking at the .current
values:
useEffect(() => {\\n const hub = createMockHub();\\n hub.on(\'message\', (data: MoveMessage) => {\\n // Use refs to access current state values\\n const valueInA = tabAValuesRef.current.find(\\n (v) => v.value === data.value\\n );\\n if (data.targetTab === \'A\') {\\n if (!valueInA) {\\n // Value should move to A\\n const valueInB = tabBValuesRef.current.find(\\n (v) => v.value === data.value\\n );\\n if (valueInB) {\\n // Use functional updates to ensure clean state transitions\\n setTabBValues((prev) =>\\n prev.filter((v) => v.value !== data.value)\\n );\\n setTabAValues((prev) => [\\n ...prev,\\n {\\n tab: \'A\',\\n value: data.value,\\n },\\n ]);\\n }\\n }\\n } else {\\n if (valueInA) {\\n // Value should move to B\\n setTabAValues((prev) =>\\n prev.filter((v) => v.value !== data.value)\\n );\\n setTabBValues((prev) => [\\n ...prev,\\n {\\n tab: \'B\',\\n value: data.value,\\n },\\n ]);\\n }\\n }\\n setLastMove(data);\\n });\\n hub.start();\\n return () => {\\n hub.stop();\\n };\\n }, []); // Empty dependency array is fine now because we\'re using refs\\n
\\nIf you notice, I also made a comment about “functional updates.”
In React, a “functional update” takes in the state’s previous value and acts on that instead of directly modifying the state. This ensures the update operates on the latest value in the component’s lifecycle, rather than on a stale value that a re-render may have left behind. The useRef usage should cover this, but functional updates are an important additional tool when dealing with closures.
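As a minimal sketch, the click handler from the counter example could use a functional update like this:

const handleClick = () => {
  // React passes the most recent state value into the updater,
  // so this does not depend on what the closure captured
  setCount((prevCount) => prevCount + 1);
};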
With the resolved code written, you should now see something like this where the values correctly pass back and forth between the tabs:
\\nWhen I worked on a resolution to the production issue I mentioned, I went through a fairly exhaustive set of steps debugging the backend processes first and working my way up to the frontend.
\\nClosure issues can often be frustrating, because on the surface it appears that the updates are handled correctly. The biggest takeaway I had with this issue was to incrementally follow the state as it is passed through a process. To correctly figure out my team’s closure issue, I did both step debugging and walked through the data change at each step.
\\nWith SignalR, this can be difficult because you need something to trigger the update to receive it on the client side. Ultimately, I recommend tracing through a process before jumping straight into a solution when you see issues like this.
\\nConclusion
In this article, you learned how to:

- Understand JavaScript closures and the three levels of scope
- Recognize stale-state closure issues in React, such as with the setTimeout function
- Resolve closure issues with the useRef Hook and functional updates
- Trace a real-world closure problem in a SignalR message handler

As I mentioned throughout the article, closures can be frustrating at times (especially when dealing with production). The best thing I have found is to understand how your application is managing state, and then trace processes on that state when seeing issues.
\\nI hope this article has helped you to understand closures, and how you can work with them in React specifically. Thanks for reading my post!
Back in the good old days, the limits of CSS made even “simple” things like vertical centering a challenge, with some developers even relying on JavaScript solutions. It was fragile and very constrained, and there was always that one exception that made it fail.
\\nWhether we were trying to align an icon or image beside the text, create one of those popular hero banners, or create a modal overlay, centering things on the vertical axis was always a struggle.
\\nBut CSS has come a long way since, providing many methods that make vertical centering easier every time. In this article, we will look at a summary of some of them, along with their use cases and limitations.
\\nEditor’s note: This post was last updated by Rishi Purwar in March 2025 to offer a direct comparison of different alignment techniques (vertical-align
, flexbox
, grid
), add interactive CodePen examples, and expand on browser compatibility.
Vertical alignment can be seen everywhere in an application, from navigation menus and form fields to product listings. To vertically align a text or an element means to arrange it within a container or block along its vertical/y-axis.
Unless explicitly stated, each strategy highlighted below will work with inline elements as well. This makes sense, given that we’ll be directly transforming their position or display properties.
\\nThis table provides a high-level comparison of the different approaches, helping you choose the best method based on your layout needs:
\\nMethod | \\nCSS properties used | \\nWorks with | \\nKey advantages | \\nBrowser support | \\n
CSS Flexbox | \\nalign-items , justify-content , align-self , align-content | \\nFlex containers | \\nFlexible, responsive, minimal extra markup | \\nAll modern browsers | \\n
CSS Grid | \\nalign-items , justify-content , align-self , place-items , grid-template-rows | \\nGrid containers | \\nPrecise placement, single-line centering with place-items | \\nAll modern browsers | \\n
Absolute positioning | \\nposition: absolute , top: 50% , transform: translateY(-50%) | \\nAny positioned parent | \\nWorks without Flex or Grid | \\nWide browser compatibility | \\n
Inline & table-based | \\nvertical-align , display: table-cell , line-height | \\nInline/text elements, tables | \\nWorks with inline and text elements, table-based layouts | \\nWide browser compatibility | \\n
CSS Flexbox introduced great alignment properties (that are now forked into their own box alignment module).
These properties allow us to control how items are placed and how empty space is distributed. Previously, this would have required either magic numbers in CSS for a specific number of elements, or clever JavaScript for dynamic amounts.
\\nThe way you align items using Flexbox depends on the flex-direction
property.
When using flex-direction: row
, justify-content
controls horizontal alignment, while align-items
controls vertical alignment.
When using flex-direction: column
, it’s the opposite. justify-content
aligns items vertically and align-items
aligns them horizontally.
align-items
and justify-content
In this example, we use align-items
and justify-content
to center items vertically and horizontally within a Flex container. This approach ensures consistent alignment, making it a go-to method for flexible layouts:
See the Pen
\\nAlign on the flex container or the flex item by Rishi Purwar (@rishi111)
\\non CodePen.
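For reference, the core of the pattern in the demo above boils down to something like this (the class name is illustrative, not taken from the pen):

.container {
  display: flex;
  justify-content: center; /* horizontal centering when flex-direction is row */
  align-items: center; /* vertical centering */
  min-height: 100vh; /* give the container height to center within */
}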
align-items
and justify-content
:Browser | \\ndisplay: flex | \\njustify-content | \\nalign-items | \\n
Chrome | \\n⚠️ 21-28 (partial, -webkit- prefix), ✅ 28+ | \\n⚠️ 21-51 (partial), ✅ 52+ | \\n⚠️ 21-51 (partial), ✅ 51+ | \\n
Firefox | \\n✅ 20+ | \\n✅ 20+ | \\n✅ 20+ | \\n
Edge | \\n✅ 12+ | \\n✅12+ | \\n✅ 12+ | \\n
Safari | \\n⚠️ 7-8 (partial, -webkit- prefix), ✅ 8+ | \\n✅ 7+ | \\n✅ 7+ | \\n
align-self
In this example, we use align-self
to vertically align a flex item within its container. This is useful when individual flex items need different alignments:
See the Pen
\\nAlign on the flex container or the flex item 2 by Rishi Purwar (@rishi111)
\\non CodePen.
align-self
Browser | \\nalign-self | \\n
Chrome | \\n\\n ⚠️ 21-35 (partial), ✅ 35+ \\n | \\n
Firefox | \\n✅ 20+ | \\n
Edge | \\n✅ 12+ | \\n
Safari | \\n✅ 7+ | \\n
margin: auto
One of the simplest and most reliable ways to vertically center a flex item is by applying margin: auto
. This approach automatically adjusts the margins around the Flex item, distributing the remaining space evenly and perfectly centering it within the container:
See the Pen
\\nUsing margin: auto on a flex item by Rishi Purwar (@rishi111)
\\non CodePen.
This tactic is one of my favorites because of its simplicity. The only major limitation is that it’ll only work with a single element.
\\nalign-content
According to the CSS Box Alignment Module Level 3 specification, align-content
can be used to control alignment along the block axis in block and multicol containers. This allows centering content within these containers, similar to how it’s done in Flex and Grid layouts:
See the Pen
\\nalign-content by Rishi Purwar (@rishi111)
\\non CodePen.
align-content
:Browser | \\nalign-content | \\n
Chrome | \\n✅ 21+ | \\n
Firefox | \\n✅ 28+ | \\n
Edge | \\n✅ 12+ | \\n
Safari | \\n✅ 9+ | \\n
In this example, pseudo-elements (::before
and ::after
) are used to distribute space evenly within the Flex container. By setting their flex: 1
, they push the Flex item to the center, achieving vertical alignment without extra markup. However, this approach is not very practical for most layouts:
See the Pen
\\nPseudo-elements on a flex container by Rishi Purwar (@rishi111)
\\non CodePen.
Browser | \\npseudo-elements (:before & :after ) | \\nflex-direction: column | \\n
Chrome | \\n✅ 4+ | \\n✅ 21+ | \\n
Firefox | \\n✅ 2+ | \\n✅ 72+ | \\n
Edge | \\n✅ 12+ | \\n✅ 12+ | \\n
Safari | \\n✅ 3.1+ | \\n✅ 7+ | \\n
CSS Grid makes vertical alignment just as easy as Flexbox, giving you great control over content placement. With properties like align-items
, justify-items
, and place-items
, you can quickly center elements within a grid container—no extra tricks needed! Let’s explore how to align items vertically using Grid.
align-items
and justify-content
In this example, we use align-items: center
to vertically align grid items and justify-content: center
to center them horizontally within the grid container. This approach ensures the item stays perfectly centered without needing extra spacing tricks:
See the Pen
\\nAlign on the grid container by Rishi Purwar (@rishi111)
\\non CodePen.
align-items
and justify-content:

Browser | display: grid | justify-content (Grid Layout) | align-items (Grid Layout)
---|---|---|---
Chrome | ✅ 57+ | ✅ 57+ | ✅ 57+
Firefox | ✅ 52+ | ✅ 52+ | ✅ 52+
Edge | ⚠️ 12-15 (partial, -webkit- prefix), ✅ 15+ | ✅ 16+ | ✅ 16+
Safari | ✅ 10.1+ | ✅ 10.1+ | ✅ 10.1+
align-self
and justify-self
In this example, we use align-self
and justify-self
to center a grid item within its cell. align-self: center
vertically centers the item, while justify-self: center
does the same horizontally. This approach is great for precise control over individual grid items without affecting the entire grid layout:
See the Pen
\\nAlign On The Grid Item by Rishi Purwar (@rishi111)
\\non CodePen.
align-self
and justify-self
:Browser | \\nalign-self (Grid Layout) | \\njustify-self (Grid Layout) | \\n
Chrome | \\n✅ 57+ | \\n✅ 57+ | \\n
Firefox | \\n✅ 52+ | \\n✅ 45+ | \\n
Edge | \\n✅ 16+ | \\n✅ 16+ | \\n
Safari | \\n✅ 10.1+ | \\n✅ 10.1+ | \\n
place-items: center
Another beautiful and straightforward grid
implementation is applying the center
value to a place-items
property on the grid container. All of its child elements are magically centered:
See the Pen
\\ngrid and place-items by Rishi Purwar (@rishi111)
\\non CodePen.
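In essence, the demo above reduces to a rule like this (selector name assumed):

.container {
  display: grid;
  place-items: center; /* shorthand for align-items: center + justify-items: center */
  min-height: 100vh;
}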
Browser support for place-items: center
:
Browser | \\n place-items (Grid Layout) | \\n
Chrome | \\n✅ > 59 | \\n
Firefox | \\n✅ > 45 | \\n
Edge | \\n✅ > 79 | \\n
Safari | \\n✅ > 11 | \\n
margin: auto
on a grid
itemSimilar to the Flexbox example above, using margin: auto
on a grid item allows it to automatically take up available space, centering it both vertically and horizontally within the grid container. This method is simple and requires no extra properties.
See the Pen
\\nmargin: auto on a grid item by Rishi Purwar (@rishi111)
\\non CodePen.
grid-row
When you need to place an element at a specific vertical position within a grid, grid-row
is the way to go. In this example, the item is placed in the second row to achieve vertical alignment within the grid layout:
See the Pen
\\nPseudo-elements on a grid by Rishi Purwar (@rishi111)
\\non CodePen.
grid-row
:Browser | \\ngrid-template-columns | \\ngrid-template-rows : repeat() | \\n
Chrome | \\n✅ > 57 | \\n✅ > 57 | \\n
Firefox | \\n✅ > 52 | \\n✅ > 52 | \\n
Edge | \\n⚠️ 12-15 (partial, -ms-grid-column prefix), ✅ > 16 | \\n✅ > 16 | \\n
Safari | \\n✅ > 11 | \\n✅ > 10 | \\n
Just like the Flexbox approach, we can use pseudo-elements with CSS Grid to create vertical alignment. By defining a three-row grid and placing empty ::before
and ::after
elements in the first and third rows, we push the main element into the center without extra markup:
See the Pen
\\nPseudo-elements on a grid by Rishi Purwar (@rishi111)
\\non CodePen.
Absolute positioning offers a reliable way to vertically align elements. By using properties like position
, margin: auto
, and transform
, we can center elements within their containers without depending on Flexbox or Grid.
position: absolute
and margin: auto
One way to vertically center an element is by using position: absolute
along with margin: auto
. By setting inset: 0
, the element is constrained within its container, and margin: auto automatically distributes the available space, centering it:
See the Pen
\\nAbsolute positioning and margin: auto by Rishi Purwar (@rishi111)
\\non CodePen.
The limitation to this approach is, of course, that the element height must be explicitly declared, or it will occupy the entire container.
\\nposition: absolute
and translateY(-50%)
This is one of the first tricks, and still a go-to, for many developers. By absolutely positioning the inner element 50 percent from the top and left of its parent, we can then translate it back by 50 percent of its own size:
\\nSee the Pen
\\nThe classic top:50%, translateY(-50%) by Rishi Purwar (@rishi111)
\\non CodePen.
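A stripped-down sketch of the trick (selector names assumed):

.parent {
  position: relative; /* establishes the positioning context */
}

.child {
  position: absolute;
  top: 50%; /* the child's top-left corner sits at the parent's center */
  left: 50%;
  transform: translate(-50%, -50%); /* pull it back by half its own width and height */
}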
A fairly solid approach, with the only major limitation being the use of translate
that might get in the way of other transforms, for example, when applying transitions or animations.
Inline and table-based methods offer simple ways to vertically align content, especially for text and table elements. While vertical-align
works well for inline elements, display: table-cell
provides a reliable way to align content within table-based layouts. However, these approaches come with limitations and are less flexible compared to modern CSS techniques like Flexbox and Grid.
display: table-cell
and vertical-align
This is a really simple approach, and one of the first (back in the day, everything was centered around tables). We’ll use the behavior of table cells and vertical-align
to center an element on a container.
This can be done with actual tables or using semantic HTML, switching the display of the element to table-cell
:
See the Pen
\\nCentering with tables by Rishi Purwar (@rishi111)
\\non CodePen.
But, bear in mind that this totally fails on screen readers (even if your markup is based on divs, setting the CSS display to table
and table-cell
makes screen readers interpret it as an actual table). So, it’s far from the best when it comes to accessibility.
display: table-cell
and vertical-align
CSS properties have great browser support, making it a reliable choice for vertical alignment, especially when working with table-like layouts.
vertical-align
for inline elementsYou can also use the vertical-align
property to center inline, inline-block, or table cell elements vertically. One of the many applications for this approach is to vertically align an image with text or to vertically align the contents of a table cell:
See the Pen
\\nUsing vertical-align for inline elements by Rishi Purwar (@rishi111)
\\non CodePen.
The fact that this method doesn’t work with the block element could be a deal breaker. Apart from that, it works reasonably well and is supported by older browsers.
\\nline-height
Setting a line-height vertically aligns your text by distributing an equal proportion of space above and below it, creating an illusion of vertical centering.
\\nWhen the line-height
value is greater than the font size of the text, we can, by default, get extra spacing, and the extra space is then distributed evenly above and below the text. This makes the text appear vertically centered within its containing element. The implementation of this is straightforward:
See the Pen
\\nline-height by Rishi Purwar (@rishi111)
\\non CodePen.
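The core of the technique is simply matching the line-height to the container’s height; here is a sketch with illustrative values:

.badge {
  height: 48px;
  line-height: 48px; /* equal to the height, so a single line of text centers */
}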
line-height
line-height
has full support for all modern browsers; feel free to make the most of it.
Another oldie that didn’t catch up for whatever reason is using inline-block
with a ghost (pseudo) element that has 100% height of the parent, then setting vertical-align: middle
for both the pseudo-element and the element we want to center:
See the Pen
\\nThe ghost element method by Rishi Purwar (@rishi111)
\\non CodePen.
It works quite well, with the most noticeable catch being that it moves the horizontal center just a tiny bit to the right because of the always cringy behavior of white space between inline-block
elements.
This can be dealt with by adjusting the margin on the pseudo-element. In our case, we’ve assigned margin-left: -0.5ch
. We can also get a perfect centering by setting the font size to 0
on the container and resetting it to px
or rem
on the element:
See the Pen
\\nThe ghost element method 2 by Rishi Purwar (@rishi111)
\\non CodePen.
The brain follows an observed pattern: it is designed to read and easily assimilate concepts from left to right (left-aligned), which makes reading large blocks of text easier.
\\nVertical alignment may stand out aesthetically, but those with reading difficulties may find it challenging to work with. Vertically aligned text should be minimal for the sake of accessibility. This means it should be limited to headers to accommodate users with reading impairments.
\\nVertically aligned, large text is known to reduce reading speed because readers need to pause before finding the next line. This isn’t advisable for long text. If you must slow down the reading speed, it should be for the right reasons, such as emphasizing specific content. Otherwise, text alignment should be kept simple for ease of reading.
For vertical alignment, it is more advisable to settle for CSS Flexbox or Grid in most cases, because these tools offer a cleaner and more responsive approach compared to mimicking tables with CSS or other workarounds.
\\nCSS has come a long way, making vertical alignment easier than ever. We’ve explored some of the best techniques, each with its use cases and limitations. Modern solutions like Flexbox and Grid offer flexibility and responsiveness, while classic methods still come in handy for compatibility. The key is knowing when to use which approach.
\\n\\nGot a favorite vertical alignment trick? Drop it in the comments; we’d love to hear it!
Microsoft is porting the TypeScript compiler to Go, resulting in a 10x speed boost. This article explains why Go was chosen over Rust and C#, why the compiler was ported instead of rewritten, and what this means for developers, including faster builds, improved CI/CD performance, and better editor responsiveness.
\\nIf you are a developer who has been working in the JavaScript/TypeScript ecosystem for a long time, the last couple of weeks have been quite interesting. In what can be considered one of the most pivotal moments of the past decade, Microsoft announced that they are porting the TypeScript compiler to Go. While the port to an entirely new language is big news in and of itself, it was also announced that this move will result in a 10x faster compiler.
\\nIn this article, we will look into the significance of this move and what it means for TypeScript developers. Let’s dive in!
\\nWhen the language was originally designed in 2012, the TypeScript team chose to implement the compiler in TypeScript itself. This meant that the TypeScript code written by developers like us would also pass through code written in TypeScript (the compiler). This decision was made to ensure that the compiler could be easily maintained and extended by the community.
\\nAnother reason was that, in 2012, the language was mainly being used in UI development tasks instead of compute-intensive applications.
\\nHowever, as the language grew in popularity and complexity, the compiler’s performance became a bottleneck for many developers. In large projects, the TypeScript compiler started taking a significant amount of time to build and compile code. This was especially true for projects with millions of lines of code and complex type systems. For instance, from the data presented in the official blog post, the VS Code repo (with more than 1.5 million lines of code) was taking 77.8 seconds to compile – not an insignificant amount of time!
\\nThis prompted the maintainers to look for ways to improve the compiler’s performance. They decided to port the compiler to Go and called the entire effort ‘Project Corsa’. After the port, the same repository was compiled in just 7.5 seconds!
\\nThese benefits are not just limited to large repositories like VS Code but are expected to be seen across the board. For another reference, the rxjs repo (with about 2100 lines of code) was taking 1.1 seconds, which was reduced to 0.1 seconds after the port.
\\nThis means that we are literally seeing a 10x improvement in the compilation times across the whole spectrum of projects.
\\nOne question that might come to your mind is: How was a large codebase like TypeScript built from the ground up in a new language like Go so quickly? Well, it wasn’t.
\\nThe elegance of this solution is that the compiler was not re-written from scratch. Instead, all the code in the repository was programmatically translated into its Go equivalent. This means that the various parts that make up the TypeScript compiler — like the scanner, parser, binder, and type checker — were all “lifted and shifted” to Go.
\\nOne of the TypeScript maintainers shares more details about why porting was chosen as the approach here. The main reasons are:
\\nAnother advantage of this approach is that most of the code can be ported over to Go with automated scripts and only the critical parts can be re-written. This shows when we compare the respective files from both the codebases. For instance, this is what a helper method called reportCircularityError
looks like in the checker.ts
file in the TypeScript codebase:
function reportCircularityError(symbol: Symbol) {\\n const declaration = symbol.valueDeclaration;\\n // Check if variable has type annotation that circularly references the variable itself\\n if (declaration) {\\n if (getEffectiveTypeAnnotationNode(declaration)) {\\n error(symbol.valueDeclaration, Diagnostics._0_is_referenced_directly_or_indirectly_in_its_own_type_annotation, symbolToString(symbol));\\n return errorType;\\n }\\n // Check if variable has initializer that circularly references the variable itself\\n if (noImplicitAny && (declaration.kind !== SyntaxKind.Parameter || (declaration as HasInitializer).initializer)) {\\n error(symbol.valueDeclaration, Diagnostics._0_implicitly_has_type_any_because_it_does_not_have_a_type_annotation_and_is_referenced_directly_or_indirectly_in_its_own_initializer, symbolToString(symbol));\\n }\\n }\\n else if (symbol.flags & SymbolFlags.Alias) {\\n const node = getDeclarationOfAliasSymbol(symbol);\\n if (node) {\\n error(node, Diagnostics.Circular_definition_of_import_alias_0, symbolToString(symbol));\\n }\\n }\\n\\n return anyType;\\n}\\n\\n
This is the equivalent method in the checker.go
file in the Go codebase:
func (c *Checker) reportCircularityError(symbol *ast.Symbol) *Type {\\n declaration := symbol.ValueDeclaration\\n // Check if variable has type annotation that circularly references the variable itself\\n if declaration != nil {\\n if declaration.Type() != nil {\\n c.error(symbol.ValueDeclaration, diagnostics.X_0_is_referenced_directly_or_indirectly_in_its_own_type_annotation, c.symbolToString(symbol))\\n return c.errorType\\n }\\n // Check if variable has initializer that circularly references the variable itself\\n if c.noImplicitAny && (!ast.IsParameter(declaration) || declaration.Initializer() != nil) {\\n c.error(symbol.ValueDeclaration, diagnostics.X_0_implicitly_has_type_any_because_it_does_not_have_a_type_annotation_and_is_referenced_directly_or_indirectly_in_its_own_initializer, c.symbolToString(symbol))\\n }\\n } else if symbol.Flags&ast.SymbolFlagsAlias != 0 {\\n node := c.getDeclarationOfAliasSymbol(symbol)\\n if node != nil {\\n c.error(node, diagnostics.Circular_definition_of_import_alias_0, c.symbolToString(symbol))\\n }\\n }\\n\\n\\n return c.anyType\\n}\\n\\n
Notice how each line can be compared and mapped to its equivalent in the Go codebase. This is the power of the porting approach.
\\nWhen the performance aspect was identified as a bottleneck, the TypeScript team started looking into ways to improve the compiler. As explained by Anders Hejlsberg in this video, they considered multiple languages: Rust, C# (which is Microsoft’s own in-house favorite), and Go. There were several pros and cons for each language.
\\nRust is a systems programming language known for its performance and safety. It’s growing in popularity among developers for performance-critical applications, so it could have been a great option for this project.
\\nC# is Microsoft’s own language and is used in many of its products. If it had been chosen, it would have allowed the team to leverage the existing knowledge and tools.
\\nThe team ultimately chose Go as the language to port the TypeScript compiler. Go is known for its fast compilation times and low memory usage. Some other technical reasons for choosing Go include:
\\nHowever, the more important reason for choosing Go is its semantic similarity to TypeScript and the “portability” that we saw in the previous section. This was a key factor in the decision-making process. More details about the team’s decision process can be found here.
\\nHejlsberg mentioned that most of the parts related to porting the compiler are complete, while the type checker is about 80 percent complete.
\\n\\nActive development is now focused on the language service. While most of the performance gains are attributed to using a native language like Go, the rest of the improvements come from other fine-tuning that the team is doing. One such improvement is leveraging concurrency – for instance, running four instances of the type checker instead of just one. Another boost comes from re-architecting the language service to better align with the Language Server Protocol.
The Language Server Protocol (LSP) is now widely used by modern language services. However, when TypeScript was originally created, LSP didn’t exist. Porting TypeScript to Go gives the team an opportunity to re-architect the language service to better align with the LSP.
\\nOne of the main benefits of this transition is improved performance, with no extra effort required from developers. Once a developer updates to TypeScript v7, the new Go-based compiler will automatically be used when running tsc
.
A faster language service (which VS Code uses for providing IntelliSense, code navigation, etc.) is also expected to be a part of the benefits. This will result in faster editor startup times and better responsiveness.
\\nOne of the immediate benefits that developers can expect is faster build times. This is especially true for CI/CD pipelines where the build times can be significantly reduced. This will result in faster feedback loops and quicker deployments.
\\nWhen a huge TypeScript repo is loaded in a code editor like VS Code, we notice that there is a significant delay in the time it takes for the editor to load the files, set up the links between files, and provide IntelliSense. With the new compiler, this is expected to be significantly faster. Even the linting process is expected to be faster, which would make the red squiggly lines (a sight every developer detests) appear faster.
\\nAnother area where developers can expect to see improvements is in the hot reload times. When a developer makes a change in the code and saves it, the time it takes for the changes to reflect in the browser is expected to be faster. This is because the compiler can process the incrementally changed files faster.
\\nThe TypeScript v5.9 release is expected soon, with the codebase continuing to be written in TypeScript through the v6.x versions. During this phase, some features will be deprecated, and some breaking changes will be introduced to prepare for the transition to a native compiler. The fully native Go-based compiler is anticipated to be released with TypeScript v7, following the completion of the v6.x series.
\\nWhat Hejlsberg and the team have accomplished here is the TypeScript equivalent of running a four-minute mile. It’s a significant milestone in the history of TypeScript. While the performance benefits are sure to trickle down to the developers and the ecosystem, this will also inspire other libraries in the TypeScript ecosystem to push the limits of what is possible.
\\nI don’t know about you, but this sure makes me excited about the future of TypeScript. What a time to be alive!
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nThe proper handling of JavaScript closures is essential to any JavaScript project.
\\nIn React projects specifically, closures can manifest themselves in ways that are not always readily apparent.
\\nIn this article, I will explain what closures are and provide examples of how to manage them. We’ll also cover a real-life example that I handled with my professional job and the production application we support.
\\nI’ll be referencing my sample project on GitHub throughout the article.
\\nA JavaScript closure is the relationship between a JavaScript function and references to its surrounding state. In JavaScript, state values have “scope” — which defines how accessible a value is. The more general concept of reference access is also called “lexical scope.” There are three main levels of scope in JavaScript:
\\nGlobal scope — Values accessible anywhere in the program
Function scope — Values accessible only inside the function that declares them
Block scope — Values accessible only between the enclosing { and }
Here is an example of scope in code:
\\n// Global Scope\\nlet globalValue = \\"available anywhere\\";\\n\\n// Function Scope\\nfunction yourFunction() { \\n // var1 and var2 are only accessible in this function\\n let var1 = \\"hello\\";\\n let var2 = \\"world\\";\\n\\n console.log(var1);\\n console.log(var2);\\n}\\n\\n// Block Scope\\nif (globalValue === \\"available anywhere\\") {\\n // variables defined here are only accessible inside this conditional\\n let b1 = \\"block 1\\";\\n let b2 = \\"block 2\\";\\n}\\n\\n
In the example code above:
\\nglobalValue — Can be reached anywhere in the program
var1 and var2 — Can only be reached inside yourFunction
b1 and b2 — Can only be accessed inside the if block where they are declared
Closures happen when you make variables available inside or outside of their normal scope. This can be seen in the following example:
\\nfunction start() {\\n // variable created inside function\\n const firstName = \\"John\\";\\n\\n // function inside the start function which has access to firstName\\n function displayFirstName() {\\n // displayFirstName creates a closure over firstName\\n console.log(firstName);\\n }\\n // prints \\"John\\" to the console\\n displayFirstName();\\n}\\nstart();\\n\\n
In JavaScript projects, closures can cause issues where some values are accessible and others are not. When working with React specifically, this often happens when handling events or local state within components.
\\nIf you’d like a more in-depth review of closures in general, I recommend checking out our article on JavaScript closures, higher-order functions, and currying.
\\nReact projects usually encounter closure issues with managing state. In React applications, you can manage state local to a component with useState
. You can also leverage tools for centralized state management like Redux, or React Context for state management that goes across multiple components in a project.
Controlling the state of a component, or of multiple components, requires understanding which values are accessible and where. When managing state in a React project, you may encounter frustrating closure issues where inconsistent changes occur.
\\nTo better explain the concepts of closures in React, I’ll show an example using the built-in setTimeout
function. After that example in the following section, I will cover a real world production issue I had to resolve with closures. In all of these examples, you can follow along with my sample project.
Consider an application that takes in an input and does an async action. Usually you would see this with a form, or something that would take in client inputs and then pass them over to an API to do something. We can simplify this with a setTimeout
in a component like the following:
const SetTimeoutIssue = () => {\\n const [count, setCount] = useState(0);\\n const handleClick = () => {\\n setCount(count + 1);\\n // This will always show the value of count at the time the timeout was set\\n setTimeout(() => {\\n console.log(\'Current count (Issue):\', count);\\n alert(`Current count (Issue): ${count}`);\\n }, 2000);\\n };\\n return (\\n <div className=\\"p-4 bg-black rounded shadow\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">setTimeout Issue</h2>\\n <p className=\\"mb-4\\">Current count: {count}</p>\\n <button\\n onClick={handleClick}\\n className=\\"bg-blue-500 text-white px-4 py-2 rounded hover:bg-blue-600\\"\\n >\\n Increment and Check After 2s\\n </button>\\n <div className=\\"mt-4 p-4 bg-gray-100 rounded\\">\\n <p className=\\"text-black\\">\\n Expected: Alert shows the updated count\\n </p>\\n <p className=\\"text-black\\">\\n Actual: Alert shows the count from when setTimeout was\\n called\\n </p>\\n </div>\\n </div>\\n );\\n};\\n\\n
This looks like something that should not have issues. The user clicks a button and a counter value is incremented and then shown in an alert modal. Where the issue happens is:
\\nconst handleClick = () => {\\n setCount(count + 1);\\n // This will always show the value of count at the time the timeout was set\\n setTimeout(() => {\\n console.log(\'Current count (Issue):\', count);\\n alert(`Current count (Issue): ${count}`);\\n }, 2000);\\n };\\n\\n
The count
value is captured by the setTimeout
function call in a closure. If you took this example and attempted to click the button multiple times in rapid succession you would see something like this:
In that screenshot, the Current Count: 1 indicates that the count
value is actually “1.” Since the setTimeout
created a closure and locked the value to the initial 0, the modal shows 0.
To resolve this issue, we can use the useRef
Hook to create a reference that always has the latest value across re-renders. With React state management, issues can occur where a re-render pulls data from a previous state.
If you just use useState
Hooks without a lot of complexity, you generally can get away with the standard getting and setting state. However, closures in particular data can have issues persisting as updates occur. Consider a refactor of our original component like the following:
const SetTimeoutSolution = () => {\\n const [count, setCount] = useState(0);\\n const countRef = useRef(count);\\n // Keep the ref in sync with the state\\n countRef.current = count;\\n const handleClickWithRef = () => {\\n setCount(count + 1);\\n // Using ref to get the latest value\\n setTimeout(() => {\\n console.log(\'Current count (Solution with Ref):\', countRef.current);\\n alert(`Current count (Solution with Ref): ${countRef.current}`);\\n }, 2000);\\n };\\n return (\\n <div className=\\"p-4 bg-black rounded shadow\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">setTimeout Solution</h2>\\n <p className=\\"mb-4\\">Current count: {count}</p>\\n <div className=\\"space-y-4\\">\\n <div>\\n <button\\n onClick={handleClickWithRef}\\n className=\\"bg-green-500 text-black px-4 py-2 rounded hover:bg-green-600\\"\\n >\\n Increment and Check After 2s\\n </button>\\n <div className=\\"mt-4 p-4 bg-gray-100 rounded\\">\\n <p className=\\"text-black\\">\\n Expected: Alert shows the updated count\\n </p>\\n <p className=\\"text-black\\">\\n Actual: Alert shows the updated count\\n </p>\\n </div>\\n </div>\\n </div>\\n </div>\\n );\\n};\\n\\n
The difference in the code from the original issue is:
\\nconst [count, setCount] = useState(0);\\n const countRef = useRef(count);\\n // Keep the ref in sync with the state\\n countRef.current = count;\\n\\n const handleClickWithRef = () => {\\n setCount(count + 1);\\n // Using ref to get the latest value\\n setTimeout(() => {\\n console.log(\'Current count (Solution with Ref):\', countRef.current);\\n alert(`Current count (Solution with Ref): ${countRef.current}`);\\n }, 2000);\\n };\\n\\n
You’ll notice that we are using the countRef
value, which references the actual state value for count
. The reference persists across re-renders and thus resolves this closure issue. If you’d like more information on useRef, I recommend reviewing LogRocket’s guide to React refs.
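\\nThis ref-syncing pattern is common enough that you can pull it into a small custom hook. Here’s a minimal sketch (the hook name useLatest is my own, not something React provides):
\\nimport { useRef } from \'react\';\\n\\n// Returns a ref that always points at the newest value passed in\\nfunction useLatest(value) {\\n const ref = useRef(value);\\n // This runs on every render, so the ref never goes stale\\n ref.current = value;\\n return ref;\\n}\\n\\n// Usage inside a component:\\n// const countRef = useLatest(count);\\n// setTimeout(() => alert(countRef.current), 2000);\\n\\n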
In my professional role, I am a tech lead of a product team that manages an application used nationally by my company. This application handles real-time updates of data that reside in different queues. These queues are shown visually on a page with multiple tabs (one tab per queue). The page will receive messages from Azure’s SignalR service when the data is changed by backend processes. The messages received indicate how to either update the data or move it to a different queue.
\\nMy team encountered an issue where this whole process was generating multiple errors. Basically, some updates seemed to be occurring correctly, while others were missed or incorrect. This was very frustrating for our users. It was also very difficult to debug as the SignalR service operates in real time, and requires triggering messages to be sent from the server to the client.
\\nInitially, I thought that this had to be something on our backend. I walked through the backend processes that generate the SignalR messages with the devs on my team. When it became apparent that the messages were being sent correctly, I switched over to looking at the frontend project.
\\nIn a deep dive of the code, I found that the issue was basically a closure problem. We were using the SignalR client package from Microsoft, and the event handler that was receiving the messages was incorrectly acting on old state.
\\nFor the solution to my problem, I refactored the message handler and also used the useRef
hook that I had mentioned before. If you’re following along on my sample project, I’m referring to the SignalRIssue
and SignalRSolution
components.
Consider the original SignalRIssue component:
\\nimport React, { useState, useEffect } from \'react\';\\nimport { ValueLocation, MoveMessage } from \'../types/message\';\\nimport { createMockHub, createInitialValues } from \'../utils/mockHub\';\\nimport ValueList from \'./ValueList\';\\nimport MessageDisplay from \'./MessageDisplay\';\\n\\nconst SignalRIssue: React.FC = () => {\\n const [tabAValues, setTabAValues] = useState<ValueLocation[]>(() =>\\n createInitialValues()\\n );\\n const [tabBValues, setTabBValues] = useState<ValueLocation[]>([]);\\n const [activeTab, setActiveTab] = useState<\'A\' | \'B\'>(\'A\');\\n const [lastMove, setLastMove] = useState<MoveMessage | null>(null);\\n useEffect(() => {\\n const hub = createMockHub();\\n hub.on(\'message\', (data: MoveMessage) => {\\n // The closure captures these initial arrays and will always reference\\n // their initial values throughout the component\'s lifecycle\\n if (data.targetTab === \'A\') {\\n // Remove from B (but using stale B state)\\n setTabBValues(tabBValues.filter((v) => v.value !== data.value));\\n // Add to A (but using stale A state)\\n setTabAValues([\\n ...tabAValues,\\n {\\n tab: \'A\',\\n value: data.value,\\n },\\n ]);\\n } else {\\n // Remove from A (but using stale A state)\\n setTabAValues(tabAValues.filter((v) => v.value !== data.value));\\n // Add to B (but using stale B state)\\n setTabBValues([\\n ...tabBValues,\\n {\\n tab: \'B\',\\n value: data.value,\\n },\\n ]);\\n }\\n setLastMove(data);\\n });\\n hub.start();\\n return () => {\\n hub.stop();\\n };\\n }, []); // Empty dependency array creates the closure issue\\n\\n return (\\n <div className=\\"p-4 bg-black rounded shadow\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">SignalR Issue</h2>\\n <div className=\\"min-h-screen w-full flex items-center justify-center py-8\\">\\n <div className=\\"max-w-2xl w-full mx-4\\">\\n <div className=\\"bg-gray-800 rounded-lg shadow-xl overflow-hidden\\">\\n <MessageDisplay message={lastMove} />\\n <div className=\\"border-b border-gray-700\\">\\n <div className=\\"flex\\">\\n <button\\n onClick={() => setActiveTab(\'A\')}\\n className={`px-6 py-3 text-sm font-medium flex-1 ${\\n activeTab === \'A\'\\n ? \'border-b-2 border-purple-500 text-purple-400 bg-purple-900/20\'\\n : \'text-gray-400 hover:text-purple-300 hover:bg-purple-900/10\'\\n }`}\\n >\\n Tab A ({tabAValues.length})\\n </button>\\n <button\\n onClick={() => setActiveTab(\'B\')}\\n className={`px-6 py-3 text-sm font-medium flex-1 ${\\n activeTab === \'B\'\\n ? \'border-b-2 border-emerald-500 text-emerald-400 bg-emerald-900/20\'\\n : \'text-gray-400 hover:text-emerald-300 hover:bg-emerald-900/10\'\\n }`}\\n >\\n Tab B ({tabBValues.length})\\n </button>\\n </div>\\n </div>\\n {activeTab === \'A\' ? (\\n <ValueList values={tabAValues} tab={activeTab} />\\n ) : (\\n <ValueList values={tabBValues} tab={activeTab} />\\n )}\\n </div>\\n <div className=\\"mt-4 p-4 bg-yellow-900 rounded-lg border border-yellow-700\\">\\n <h3 className=\\"text-sm font-medium text-yellow-300\\">\\n Issue Explained\\n </h3>\\n <p className=\\"mt-2 text-sm text-yellow-200\\">\\n This component demonstrates the closure issue where\\n the event handler captures the initial state values\\n and doesn\'t see updates. Watch as values may\\n duplicate or disappear due to stale state\\n references.\\n </p>\\n </div>\\n </div>\\n </div>\\n </div>\\n );\\n};\\nexport default SignalRIssue;\\n\\n
The component basically loads, connects to a hub (here I’ve created a mock version of the SignalR connection) and then acts when messages are received. In my mocked SignalR client, I have it using setInterval
and randomly moving values from one tab to another:
import { MoveMessage, ValueLocation } from \'../types/message\';\\nexport const createInitialValues = (): ValueLocation[] => {\\n return Array.from({ length: 5 }, (_, index) => ({\\n value: index + 1,\\n tab: \'A\',\\n }));\\n};\\nexport const createMockHub = () => {\\n return {\\n on: (eventName: string, callback: (data: MoveMessage) => void) => {\\n // Simulate value movements every 2 seconds\\n const interval = setInterval(() => {\\n // Randomly select a value (1-5) and a target tab\\n const value = Math.floor(Math.random() * 5) + 1;\\n const targetTab = Math.random() > 0.5 ? \'A\' : \'B\';\\n callback({\\n type: \'move\',\\n value,\\n targetTab,\\n timestamp: Date.now(),\\n });\\n }, 2000);\\n return () => clearInterval(interval);\\n },\\n start: () => Promise.resolve(),\\n stop: () => Promise.resolve(),\\n };\\n};\\n\\n
If you ran my sample component, you would see odd behavior like this:
\\nThere should only be one occurrence of Value1
and Value5
in that list. Instead, there are multiple, and it looks like nothing is being moved over to Tab B.
Looking at the code, you can see the closure issue here:
\\nhub.on(\'message\', (data: MoveMessage) => {\\n // The closure captures these initial arrays and will always reference\\n // their initial values throughout the component\'s lifecycle\\n if (data.targetTab === \'A\') {\\n // Remove from B (but using stale B state)\\n setTabBValues(tabBValues.filter((v) => v.value !== data.value));\\n // Add to A (but using stale A state)\\n setTabAValues([\\n ...tabAValues,\\n {\\n tab: \'A\',\\n value: data.value,\\n },\\n ]);\\n } else {\\n // Remove from A (but using stale A state)\\n setTabAValues(tabAValues.filter((v) => v.value !== data.value));\\n // Add to B (but using stale B state)\\n setTabBValues([\\n ...tabBValues,\\n {\\n tab: \'B\',\\n value: data.value,\\n },\\n ]);\\n }\\n\\n
The message handler is operating directly on the stale state when updating values. When the handler receives the messages, it’s operating on a point in the state change that is older vs. the actual value that should persist across re-renders.
\\nTo resolve this situation, you can do what I did in the setTimeout
example and go back to the useRef
Hook:
const [tabAValues, setTabAValues] = useState<ValueLocation[]>(() =>\\n createInitialValues()\\n );\\n const [tabBValues, setTabBValues] = useState<ValueLocation[]>([]);\\n const [activeTab, setActiveTab] = useState<\'A\' | \'B\'>(\'A\');\\n const [lastMove, setLastMove] = useState<MoveMessage | null>(null);\\n\\n // Create refs to maintain latest state values\\n const tabAValuesRef = useRef(tabAValues);\\n const tabBValuesRef = useRef(tabBValues);\\n\\n // Keep refs in sync with current state\\n tabAValuesRef.current = tabAValues;\\n tabBValuesRef.current = tabBValues;\\n\\n
Then in the message handler, you read values from the refs rather than from a stale snapshot of the component’s state, by looking at the .current
values:
useEffect(() => {\\n const hub = createMockHub();\\n hub.on(\'message\', (data: MoveMessage) => {\\n // Use refs to access current state values\\n const valueInA = tabAValuesRef.current.find(\\n (v) => v.value === data.value\\n );\\n if (data.targetTab === \'A\') {\\n if (!valueInA) {\\n // Value should move to A\\n const valueInB = tabBValuesRef.current.find(\\n (v) => v.value === data.value\\n );\\n if (valueInB) {\\n // Use functional updates to ensure clean state transitions\\n setTabBValues((prev) =>\\n prev.filter((v) => v.value !== data.value)\\n );\\n setTabAValues((prev) => [\\n ...prev,\\n {\\n tab: \'A\',\\n value: data.value,\\n },\\n ]);\\n }\\n }\\n } else {\\n if (valueInA) {\\n // Value should move to B\\n setTabAValues((prev) =>\\n prev.filter((v) => v.value !== data.value)\\n );\\n setTabBValues((prev) => [\\n ...prev,\\n {\\n tab: \'B\',\\n value: data.value,\\n },\\n ]);\\n }\\n }\\n setLastMove(data);\\n });\\n hub.start();\\n return () => {\\n hub.stop();\\n };\\n }, []); // Empty dependency array is fine now because we\'re using refs\\n\\n
If you notice, I also made a comment about “functional updates.”
\\nIn React, a “functional update” takes in the state’s previous value and acts on that instead of directly modifying the state. This ensures the update runs against the latest value in the component’s lifecycle rather than acting on a value that may have been missed in a re-render. The useRef usage should cover this, but it’s an important additional point when dealing with closures.
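\\nHere’s a minimal before-and-after sketch using the counter from the earlier examples:
\\n// Stale read: uses whatever value of count the closure captured\\nsetCount(count + 1);\\n\\n// Functional update: React passes the latest state in as prev\\nsetCount((prev) => prev + 1);\\n\\n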
With the resolved code written, you should now see something like this where the values correctly pass back and forth between the tabs:
\\nWhen I worked on a resolution to the production issue I mentioned, I went through a fairly exhaustive set of steps debugging the backend processes first and working my way up to the frontend.
\\nClosure issues can often be frustrating, because on the surface it appears that the updates are handled correctly. The biggest takeaway I had with this issue was to incrementally follow the state as it is passed through a process. To correctly figure out my team’s closure issue, I did both step debugging and walked through the data change at each step.
\\nWith SignalR, this can be difficult because you need something to trigger the update to receive it on the client side. Ultimately, I recommend tracing through a process before jumping straight into a solution when you see issues like this.
\\nIn this article, you learned how to:
\\nUnderstand scope and how closures are created in JavaScript
Spot closure issues in React state, using the setTimeout function as an example
Resolve stale state with the useRef Hook and functional updates
Debug a real-world closure issue in a SignalR message handler
As I mentioned throughout the article, closures can be frustrating at times (especially when dealing with production). The best thing I have found is to understand how your application is managing state, and then trace processes on that state when seeing issues.
\\nI hope this article has helped you to understand closures, and how you can work with them in React specifically. Thanks for reading my post!
Imagine this: A customer is interested in building an app for their product or service. They’re mostly sold on the idea but haven’t yet committed to signing off on development. Maybe they’re unsure, or they’re struggling to convince stakeholders who control the budget.
\\nThat recently happened to me. And the reality is, you can describe your app in vivid detail — how it will work, how it all fits together — but while you can visualize the experience clearly, your customer may not. Worse, they might not visualize anything at all.
\\nGiving customers something they can see and interact with goes a long way toward earning their buy-in.
\\nWhen building Flutter apps, I know I can target iOS and Android. But Flutter also supports web apps. So I thought, “Can I create a compelling web experience that sells someone on building a mobile app?”
\\nTurns out, yes — and the result is pretty impressive.
\\nHere’s what the demo app looks like:
\\nWe have a functional app — so let’s walk through building out the demo website.
\\nmain.dart to support a showcase layout
Typically, your main.dart might look like this:
class MyApp extends StatelessWidget {\\n const MyApp({super.key});\\n\\n @override\\n Widget build(BuildContext context) {\\n return MaterialApp(\\n title: \'Flutter Demo\',\\n theme: ThemeData(\\n colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),\\n ),\\n home: PetersDeliHomePage(),\\n );\\n }\\n}\\n\\n
Here, the PetersDeliHomePage
widget is the main entry point. But we can wrap the entire app in a custom widget to showcase it side-by-side with some descriptive text, without affecting its functionality.
Let’s create a ShowcaseWidget
that displays the app on the left and some marketing copy on the right:
import \'package:appdemo/home/home.dart\';\\nimport \'package:flutter/material.dart\';\\n\\nclass ShowcaseWidget extends StatelessWidget {\\n const ShowcaseWidget({super.key});\\n\\n @override\\n Widget build(BuildContext context) {\\n return Row(\\n children: [\\n Expanded(child: PetersDeliHomePage()),\\n Expanded(child: Text(\\"We\'ll put some words over here\\")),\\n ],\\n );\\n }\\n}\\n\\n
Here’s how that looks:
\\nNow the user can interact with the app while reading key feature highlights on the side. Neat, right?
\\nMost Flutter apps use MaterialApp
as the root widget to provide styling and navigation. Because MaterialApp
is just a widget, we can nest it. That’s the trick to making our demo feel like a real app inside a browser.
We’ll create a shell with a border that wraps the inner MaterialApp
:
class DemoAppShell extends StatelessWidget {\\n const DemoAppShell({super.key});\\n\\n @override\\n Widget build(BuildContext context) {\\n return Padding(\\n padding: const EdgeInsets.all(100.0),\\n child: Container(\\n height: 800,\\n width: 450,\\n decoration: BoxDecoration(\\n borderRadius: BorderRadius.circular(20),\\n border: Border.all(color: Colors.black, width: 12),\\n color: Colors.blueGrey,\\n ),\\n child: MaterialApp(\\n home: PetersDeliApp(),\\n ),\\n ),\\n );\\n }\\n}\\n\\n
We’ll also add a splash screen with a loading animation that transitions into the app:
\\nclass Splashscreen extends StatefulWidget {\\n const Splashscreen({super.key});\\n\\n @override\\n State<Splashscreen> createState() => _SplashscreenState();\\n}\\n\\nclass _SplashscreenState extends State<Splashscreen> {\\n var showLoader = false;\\n\\n @override\\n void initState() {\\n super.initState();\\n Future.delayed(Duration(seconds: 3)).then((_) {\\n setState(() => showLoader = true);\\n\\n Future.delayed(Duration(seconds: 5)).then((_) {\\n Navigator.of(context).push(\\n MaterialPageRoute(builder: (_) => PetersDeliHomePage()),\\n );\\n });\\n });\\n }\\n\\n @override\\n Widget build(BuildContext context) {\\n return Material(\\n color: Colors.teal,\\n child: Column(\\n mainAxisAlignment: MainAxisAlignment.spaceEvenly,\\n children: [\\n appIcon(),\\n Column(\\n mainAxisSize: MainAxisSize.min,\\n children: [\\n Text(\\n \\"P E T E R \' S D E L I\\",\\n style: Theme.of(context).textTheme.headlineLarge,\\n ),\\n AnimatedOpacity(\\n duration: Duration(seconds: 1),\\n opacity: showLoader ? 1 : 0,\\n child: CircularProgressIndicator(color: Colors.tealAccent),\\n ),\\n ],\\n ),\\n ],\\n ),\\n );\\n }\\n}\\n\\n
To ensure the demo works with a mouse as well as touch, update the scroll behavior to support all pointer devices:
\\nchild: MaterialApp(\\n scrollBehavior: MaterialScrollBehavior().copyWith(\\n dragDevices: {\\n PointerDeviceKind.mouse,\\n PointerDeviceKind.touch,\\n PointerDeviceKind.stylus,\\n PointerDeviceKind.unknown\\n },\\n ),\\n navigatorKey: appNavigatorKey,\\n home: PetersDeliApp(),\\n),\\n\\n
This lets users interact with the app using any input method:
\\nIt’s a simple technique, but the illusion of launching an app within the browser is effective.
\\nBecause this is an app showcase, we want two things:
\\nAppealing text beside the demo that describes the app
Showcase copy that reacts to what the user is doing inside the demo
For the text on the right, this should be where we describe our app and what is currently happening. Fortunately, Google has a huge fonts repository that we can use:
\\nAnd there’s the google_fonts
package, which lets us use these fonts. We can add it to our pubspec.yaml
as a dependency.
For me, I’ll choose Caveat, because I think it strikes the right balance between being casual yet professional.
\\nBecause our widgets are nested under each other in the tree, when the user clicks an option in the simulator, we’d like to have that event bubble up to the outer shell. This way, we can communicate what the user is doing, or draw their attention to a specific aspect.
\\nThe various states (or pages) within our app will be the following:
\\nenum CurrentAppState {\\n Launcher,\\n Loading,\\n MainMenu,\\n Ordering,\\n Ordered,\\n}\\n\\n
We’ll also set up a Map
describing what each step does:
const showcaseWidgets = {\\n CurrentAppState.Launcher: [\\n Text(\\n \\"When everyone\'s got money for sandwiches, why should they spend it at your deli? Click into the app to find out...\\")\\n ],\\n CurrentAppState.Loading: [\\n Text(\\"Users\' session and favourites are remembered on application load.\\")\\n ],\\n CurrentAppState.MainMenu: [\\n Text(\\"The main menu, so many tasty things!\\"),\\n ListTile(\\n title: Text(\\n \\"Favorites\\",\\n ),\\n subtitle: Text(\\n \\"Frequent orders are remembered and presented to the orderer again on subsequent orders.\\"),\\n tileColor: Colors.white,\\n ),\\n ListTile(\\n title: Text(\\"Specials\\"),\\n subtitle:\\n Text(\\"Peter\'s weekend special is displayed at the bottom of the app\\"),\\n tileColor: Colors.white,\\n ),\\n ListTile(\\n title: Text(\\"Looks good\\"),\\n subtitle: Text(\\"The picture of Peter fades out as the user scrolls up\\"),\\n tileColor: Colors.white,\\n )\\n ],\\n CurrentAppState.Ordering: [\\n Text(\\"A lot of options for people who know how they want their food\\"),\\n ListTile(\\n title: Text(\\"Ordering time shown\\"),\\n subtitle: Text(\\"People can choose when they want their food\\"),\\n tileColor: Colors.white,\\n ),\\n ListTile(\\n title: Text(\\"Some sauce, or no?\\"),\\n subtitle: Text(\\"Menu items are customisable\\"),\\n tileColor: Colors.white,\\n )\\n ],\\n CurrentAppState.Ordered: [\\n Text(\\"Straight from the kitchen out to you!\\"),\\n ]\\n};\\n\\n
Then, within our topmost widget, we need to respond to these events appropriately:
\\nclass _ShowcaseTextState extends State<ShowcaseText> {\\n var step = showcaseWidgets.entries.first;\\n\\n @override\\n Widget build(BuildContext context) {\\n return BlocListener<ShowcaseBloc, ShowcaseState>(\\n listener: (context, state) {\\n if (state is ShowcaseStep) {\\n setState(() {\\n step = showcaseWidgets.entries\\n .firstWhere((x) => x.key == state.currentStep);\\n });\\n }\\n },\\n\\n
This code is quite trivial: just set the active step based on what was set in the event.
\\nFinally, we’d like to set up one more BlocListener
for when the user places an order. This is to animate the background to give the user a good microinteraction when they click Order. Because the animation begins as white, we want to animate it forward, and then back when it finishes.
@override\\nWidget build(BuildContext context) {\\n return BlocProvider(\\n create: (context) => ShowcaseBloc(),\\n child: BlocListener<ShowcaseBloc, ShowcaseState>(\\n listener: (context, state) {\\n if (state is ShowcaseStep) {\\n if (state.currentStep == CurrentAppState.Ordered) {\\n _animationController.forward().then((x) {\\n _animationController.reverse();\\n });\\n }\\n }\\n },\\n\\n
So how does this look when it’s all put together?
\\nIt looks pretty good — and it might help sell customers who were previously undecided.
\\nBuilding apps is expensive, and customers often hesitate. But if you can show them — right in the browser — what their app might look like, you’ve got a powerful tool.
\\nDon’t get me wrong — I wouldn’t roll around pumping out apps in this showcase format for just anyone. But if a customer needs a little nudge, it might be enough to win them over.
\\n\\nPlus, the sheer portability of it is great. They can send the link to coworkers to get their input. That leads to feedback, and before you know it, people are talking about what they want added or changed in the app.
\\nBuilding a showcase like this can be a powerful tool to close a customer. And afterwards, what you ship will look and feel quite similar to what they saw in the showcase — since they’re all built using the same tool.
\\nTo give it a go on your end, check out the repository below, or use the online demo:
\\n👉 GitHub: azimuthdeveloper/fancy-flutter-showcase
\\nHopefully this has inspired you to produce a showcase for your next customer who needs a little help making a decision. Hopefully you’ll close them — and then it’s time to develop the actual app!
The rise and wide adoption of artificial intelligence (AI) in software development has sparked concerns about job security, but while AI can automate certain coding tasks, it won’t entirely replace developers. This article explores how AI became a popular coding tool, the coding processes it simplifies and automates, and the limitations it still has compared to experienced human developers.
\\nAccording to Grand View Research, the global AI market size was estimated at USD 196.63 billion in 2023 and is projected to grow at a compound annual growth rate (CAGR) of 36.6% from 2024 to 2030. This growth is driven by advancements in machine learning, natural language processing, and automation, all of which are transforming software development processes.
\\nThese days, AI is more than just a tool for automating repetitive tasks. After OpenAI’s GPT-3 language model demonstrated its ability to create HTML websites by following simple instructions, AI has found practical applications in software development. Since then, the AI industry has seen many breakthroughs, with AI systems now able to write computer programs based on natural language prompts.
\\nAlthough AI advancements are revolutionizing coding, the creative, problem-solving essence of computer programming will largely depend on human expertise. AI might take the job of low-skilled developers, but the demand for experienced engineers will likely be on the rise to provide product direction and architectural design. AI code generation models like GitHub Copilot and the like are already disrupting the way developers write code. These models are getting better at generating executable code.
\\nAI tools such as GitHub Copilot and ChatGPT are reinventing software development.
\\nSince the launch of ChatGPT, generative AI has become a tool embedded in our daily workflows. According to this Stack Overflow survey, 76% of all respondents are using or intend to use AI tools in their development processes, and 72% expressed a favorable or very favorable attitude toward AI.
\\nUsing AI in the software development process can lead to significant improvements in areas such as code generation, testing, debugging, and documentation.
\\nAs AI continues to advance, the role of a developer is shifting from simply writing code to architecting, managing, reviewing, and optimizing code. Developers who embrace AI as a productivity booster will have a competitive edge.
\\nThe fear that AI will take developers’ jobs is not entirely false, but it shouldn’t be exaggerated. Historically, automation has displaced some jobs while creating new opportunities for others. The advent of mechanized farming, for example, led to the loss of some manual jobs but created new roles in machine operation and maintenance.
\\nSimilarly, while AI may automate certain aspects of software development, it is unlikely to completely replace developers. Instead, AI is here to enhance developers’ abilities, allowing them to focus on more complex, creative, and human-dependent tasks. Developers who embrace AI and learn to harness its capabilities will be positioned to thrive in this era.
\\nAI is being integrated into developers’ workflows in many ways.
\\nFor example, Postman integrated an AI feature, Postbot, into Postman applications. Using Postbot, developers are able to automate API testing, generate documentation, and debug API requests within Postman apps.
\\nCheck out “6 AI tools for API testing and development” for more examples of AI tools that directly help developers.
\\nDespite the great achievements AI has made in software development, human efforts remain valuable in several areas.
\\nBeyond coding, AI struggles with understanding complex business needs that require intuition and human interaction. AI also lacks the ability to make ethical decisions, leaving humans with the responsibility to ensure that code adheres to ethical standards and avoids biases and cross-cutting concerns.
\\nAdditionally, over-reliance on AI can result in security vulnerabilities and pose a challenge for junior developers.
\\nWhile AI can augment human efforts, it can’t entirely replace the nuanced judgment, creativity, and collaborative efforts that experienced human developers bring to the table.
\\nAs AI continues to evolve, hiring practices will likely shift as well. Businesses will always want to get more work done with less manpower and less expenditure. Recruiters might be forced to prioritize hiring highly skilled engineers who can leverage AI in their workflows.
\\n\\nAdditionally, employers will place a greater emphasis on soft skills such as problem-solving, communication, and collaboration. Simply being a programmer won’t be enough in the foreseeable future. Instead, software development will involve a combination of AI and human efforts in areas where AI hasn’t completely found a foothold — areas like business domain knowledge, high-level decision-making, understanding user pain points, system architecture design, stakeholder collaboration, and more.
\\nThe rise of AI-powered coding tools also introduces ethical concerns, particularly around junior developers, security, and bias. Junior developers risk over-relying on AI, which can hinder learning the fundamentals, critical thinking, and problem-solving skills, potentially making it harder to enter the industry as entry-level jobs become less available.
\\nSecurity vulnerabilities are another major issue, as AI-generated code can introduce risks like SQL injection, hardcoded credentials, and outdated practices, which developers may not always catch.
\\nAI models also inherit biases from their training data, leading to skewed code suggestions and even discriminatory hiring practices if AI is used in recruitment. Ethical concerns around code ownership may also arise, as AI can potentially generate solutions based on proprietary or open source code without proper attribution.
\\nTo mitigate these risks, developers should use AI as an assistant rather than a replacement, actively review AI-generated code for security flaws, and stay updated on best practices. Companies must also ensure responsible AI adoption by enforcing security guidelines and promoting mentorship for junior developers.
\\nWhile AI enhances productivity, human oversight remains essential to prevent biases, security risks, and knowledge gaps. By balancing AI adoption with critical thinking and ethical awareness, developers can effectively integrate AI without compromising quality, security, or career growth.
\\nTo stay competitive in an AI-driven industry, developers must do more than simply learn AI tools — we need to have an integrative approach to professional growth.
\\n\\nFirst, strengthening foundational programming skills is essential, as a deep understanding of core concepts allows developers to effectively leverage AI and troubleshoot AI-generated code. Second, focusing on problem-solving and critical thinking — skills that AI cannot easily replicate — will help developers tackle complex challenges and innovate beyond what AI can automate.
\\nIt’s also important that developers cultivate domain expertise in their specific industry or niche, as this contextual knowledge is invaluable for making informed decisions and creating tailored solutions.
\\nAs developers, we must prioritize continuous learning and staying up to date on emerging technologies, trends, and best practices through courses, certifications, and community events. Collaboration and communication skills are equally important, as working effectively in teams and explaining technical concepts to non-technical stakeholders will always require human interaction.
\\nTo use AI effectively without becoming overly dependent on it, developers must see AI as an augmenting tool, and look for ways to use it to automate repetitive tasks while focusing on critical areas like coding, business decisions, code reviews, debugging, and system design to maintain their skills and relevance.
CSS offers many predefined standard key-value-based properties for styling semantic HTML elements. However, while designing webpages, developers often need to repetitively use the same values for properties in several segments of stylesheets — for example, while using a primary accent color for various webpage elements.
\\nCSS now supports using custom properties, also known as CSS variables, to avoid repetitive CSS property values. Like any popular programming language, CSS also implements variables for writing clean code with a productive assignment and retrieval syntax, scoping support, and fallback values.
\\nIn this tutorial, we’ll first demystify CSS variables and then build four simple projects that utilize them. Some basic CSS knowledge is required to follow along with this tutorial. Let’s dive in!
\\nCSS variables are user-defined values that can be reused throughout a stylesheet. They are also known as custom properties. The --
prefix and var()
 function are used to define and access CSS variables, respectively:
:root {\\n --primary-color: #3498db;\\n}\\n\\nbutton {\\n background-color: var(--primary-color);\\n}\\n\\n
Unlike traditional CSS properties, CSS variables can be modified dynamically with JavaScript using element.style.setProperty. CSS variables can be changed in one place, and all elements using them update automatically. They can be defined within selectors or globally using the :root selector.
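\\nAs a quick illustration, here’s a minimal sketch of reading and updating a variable from JavaScript (the --primary-color name is just an example):
\\n// Read the computed value of a CSS variable\\nconst styles = getComputedStyle(document.documentElement);\\nconst primary = styles.getPropertyValue(\'--primary-color\').trim();\\n\\n// Update it in one place; every element using var(--primary-color) updates\\ndocument.documentElement.style.setProperty(\'--primary-color\', \'#e74c3c\');\\n\\n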
One of the most common use cases for CSS variables is managing websites in which numerous values repeat throughout the document. This helps to reduce the friction associated with refactoring or updating your code.
\\nEditor’s note: This article was updated by Emmanuel John in March 2025 to include instructions on setting CSS variables dynamically with JavaScript, differentiate between CSS and SASS variables, and troubleshoot common developer issues with CSS variables.
\\nTo solidify our knowledge about CSS variables, we’ll build four very simple projects:
\\nEach project should provide insights into how we can use CSS variables to take care of a wide variety of use cases.
\\nAlso referred to as custom properties or cascading variables, CSS variables have myriad use cases.
\\nCSS variables can be declared in two ways (--
prefix and @property
at-rule).
--
prefixThe --
prefix declares variables in two ways (globally and locally). The former uses the :root
selector to define global variables:
:root {\\n --primary-color: blue;\\n --font-size: 16px;\\n}\\n\\n
While the latter defines a variable inside specific elements:
\\n.card {\\n --card-bg: lightgray;\\n background-color: var(--card-bg);\\n}\\n\\n
Here, --card-bg
is only accessible inside .card
. Global variables are accessible everywhere in the stylesheet.
@property
at-ruleThe @property
at-rule allows you to be more expressive with the definition of CSS variables by allowing you to define their type, control inheritance, and set default values which act as fallback. Using the @property
at-rule ensures more predictable behavior.
@property --card-color {\\n syntax: \\"<color>\\";\\n inherits: false;\\n initial-value: #FFFFFF;\\n}\\n\\n
Here, --card-color
is declared as a CSS variable that expects <color>
value. The inherits:false;
property prevents it from being inherited by child elements, and initial-value:#FFFFFF;
sets a default color when no <color>
value is assigned.
CSS variables can be applied to elements using the var()
function:
button {\\n background-color: var(--primary-color);\\n font-size: var(--font-size);\\n}\\n\\n
If the value of --primary-color
is updated, all the elements using it will automatically change.
Like traditional CSS properties, CSS variables follow standard property rules — i.e., they inherit, can be overridden, and adhere to the CSS specificity algorithm. The value of an element is inherited from its parent elements if no custom property is defined in a specific child element, as shown in the following example.
\\nThe HTML:
\\n<div class=\\"container\\">\\n <article class=\\"post\\">\\n <h1 class=\\"post-title\\">Heading text</h1>\\n <p class=\\"post-content\\">Paragraph text</p>\\n </article>\\n</div>\\n\\n
The CSS:
\\n.container {\\n --padding: 1rem;\\n}\\n\\n.post {\\n --padding: 1.5rem;\\n}\\n\\n.post-content {\\n padding: var(--padding);\\n}\\n\\n
In this case, the .post-content
selector inherits padding value from its direct parent, .post
, with the value of 1.5rem
rather than 1rem
. You can use Chrome DevTools to see from where the specific CSS variable value gets inherited, as shown in the following preview:
You can use CSS variable inheritance to pass variable values from parent elements to child elements without re-declaring them in selectors. Also, overriding variable values is possible as traditional CSS properties.
\\nCSS cascade rules handle the precedence of CSS definitions that come from various sources.
\\nCSS variables also follow the standard cascade rules as any other standard properties. For example, if you use two selectors with the same specificity score, the variable assignment order will decide the value of a specific CSS variable.
A variable assignment in a new CSS block typically overrides the existing precedence and re-assigns values to variables.
\\n\\nLet’s understand variable cascading rules with a simple example.
\\nThe HTML:
\\n<span class=\\"lbl lbl-ok\\">OK</span>\\n\\n
The CSS:
\\n.lbl {\\n --lbl-color: #ddd;\\n background-color: var(--lbl-color);\\n padding: 6px;\\n}\\n\\n.lbl-ok { --lbl-color: green }\\n\\n/* --- more CSS code ---- */\\n/* ---- */\\n\\n.lbl-ok { --lbl-color: lightgreen }\\n\\n
The above CSS selectors have the same specificity, so CSS uses cascading precedence to select the right lbl-color
value for elements. Here, we’ll get the lightgreen
color for the span
element since lightgreen
is in the last variable assignment. The color of the label may change based on the order of the above selectors.
CSS variables also work with developer-defined cascade layers that use the @layer
at-rule. To demonstrate this, we can add some cascade layers to the above CSS snippet:
@layer base, mods;\\n\\n@layer base {\\n .lbl {\\n --lbl-color: #ddd;\\n background-color: var(--lbl-color);\\n padding: 6px;\\n }\\n}\\n\\n@layer mods {\\n .lbl-ok { --lbl-color: lightgreen }\\n}\\n\\n
You can check how cascading rules overwrite variable values with Chrome DevTools:
\\nWhen using custom properties, you might reference a custom property that isn’t defined in the document. You can specify a fallback value to be used in place of that value.
\\nThe syntax for providing a fallback value is still the var()
function. Send the fallback value as the second parameter of the var()
function:
:root {\\n --light-gray: #ccc;\\n}\\n\\np {\\n color: var(--light-grey, #f0f0f0); /* No --light-grey, so #f0f0f0 is \\n used as a fallback value */\\n}\\n\\n
Did you notice that the variable name is misspelled as --light-grey? Since that variable is undefined, the browser loads the fallback value, #f0f0f0
for the color
property.
A comma-separated list is also accepted as a valid fallback value. For example, the following CSS definition loads red, blue
as the fallback value for the gradient function:
background-image: linear-gradient(90deg, var(--colors, red, blue));\\n\\n
You can also use variables as fallback values with nested var()
functions. For example, the following definition loads #ccc
from --light-gray
if it’s defined:
color: var(--light-grey, var(--light-gray, #f0f0f0));\\n\\n
Note that it’s generally not recommended to nest so many CSS functions due to performance issues caused by nested function parsing. Instead, try to use one fallback value with a readable variable name.
\\n\\nIf your web app should work on older web browsers that don’t support custom properties, you can define fallback values outside of the var()
function as follows:
p {\\n color: #f0f0f0; /* fallback value for older browsers */\\n color: var(--light-grey); \\n}\\n\\n
If the browser doesn’t support CSS variables, the first color
property sets a fallback value. We’ll discuss browser compatibility in the upcoming section about browser support for the CSS variables feature.
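\\nIf you need to detect support at runtime, here’s a minimal sketch using the CSS.supports API (the --x variable name is just a probe):
\\n// Browsers that understand custom properties parse var() as a valid value\\nconst supportsVars = window.CSS && CSS.supports(\'color\', \'var(--x)\');\\nconsole.log(supportsVars); // true in modern browsers, false in legacy ones\\n\\n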
Meanwhile, custom properties can get invalid values due to developer mistakes. Let’s learn how the browser handles invalid variable assignments and how to override the default invalid assignment handling behavior:
\\n:root { \\n --text-danger: #ff9500; \\n} \\n\\nbody { \\n --text-danger: 16px;\\n color: var(--text-danger); \\n} \\n\\n
In this snippet, the --text-danger
custom property was defined with a value of #ff9500
. Later, it was overridden with 16px
, which isn’t technically wrong. But when the browser substitutes the value of --text-danger
in place of var(--text-danger)
, it tries to use a value of 16px
, which is not a valid property value for color in CSS.
The browser treats it as an invalid value and checks whether the color property is inheritable by a parent element. If it is, it uses it. Otherwise, it falls back to a default color (black in most browsers).
\\nThis process doesn’t bring the correct initial value defined in the :root
selector block, so we have to define custom properties with the accepted type and initial value using the @property
at-rule, as shown in the following code snippet:
@property --text-danger {\\n syntax: \\"<color>\\";\\n inherits: true;\\n initial-value: #ff9500;\\n}\\n\\nbody { \\n --text-danger: 16px;\\n color: var(--text-danger); \\n} \\n\\n
Now, the browser renders the expected text color even if we assign an invalid value within the body
selector.
As discussed in previous examples, it’s possible to create global CSS variables using either :root
or @property
at-rule. Also, creating local variables is possible by defining variables inside child element selectors. For example, a variable defined inside header
won’t be exposed to body
.
However, if you define a variable inside a specific <style>
tag, it gets exposed to all elements that match the particular selector. What if you need to create a scoped variable that is only available for a targeted HTML segment?
By default, browsers won’t scope style tags even if we wrap them with elements like <div>
for creating scoped variables. The @scope
at-rule helps us implement scoped CSS variables with scoped style tags, as shown in the following HTML snippet:
<style>\\n button {\\n padding: 6px 18px;\\n border: none;\\n border-radius: 4px;\\n margin: 12px;\\n background-color: var(--accent-color, #4cc2e6);\\n }\\n</style>\\n\\n<div>\\n <style>\\n @scope {\\n button {\\n --accent-color: #f2ba2c;\\n }\\n }\\n </style>\\n <button>Sample button #1</button>\\n</div>\\n\\n<button>Sample button #2</button>\\n\\n
Here, the second style tag becomes scoped for the wrapped <div>
element because of the @scope
at-rule. So, the button
selector in the second style tag selects only buttons inside the parent <div>
element. As a result, --accent-color
is only available for the first button.
The first button gets the #f2ba2c
color for the background since the scoped style tag’s button
selector sets the --accent-color
variable. The second button gets the #4cc2e6
fallback background color since the --accent-color
scoped variable is not available in the global scope:
Learn more about the @scope
at rule from the official MDN documentation. @scope
is still an experimental feature, so you can use the minimal css-scope-inline
library to create scoped CSS variables in production.
Variables should be grouped logically as follows:
\\n:root {\\n /* Colors */\\n --primary-color: #3498db;\\n --secondary-color: #2ecc71;\\n\\n /* Typography */\\n --font-size-base: 16px;\\n --font-weight-bold: 700;\\n}\\n
Fallback values should be used to ensure compatibility:
\\ncolor: var(--text-color, black);\\n
If --text-color
is not defined, black
will be used as a default.
Use meaningful names (avoid --color1, --sizeA).
In CSS frameworks such as Bootstrap, variables make sharing a base design across elements much easier. Take the .bg-danger
 class, which gives an element a red background, typically paired with white text. In this first project, you’ll build something similar.
Get started with the first project by adding the following HTML document to a new .html
file:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n <head>\\n <meta charset=\\"UTF-8\\" />\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1.0\\" />\\n <meta http-equiv=\\"X-UA-Compatible\\" content=\\"ie=edge\\" />\\n <title>CSS Variables - Button Variations</title>\\n </head>\\n <body>\\n <section>\\n <div class=\\"container\\">\\n <h1 class=\\"title\\">CSS Color Variations</h1>\\n <div class=\\"btn-group\\">\\n <button class=\\"btn btn-primary\\">Primary</button>\\n <button class=\\"btn btn-secondary\\">Secondary</button>\\n <button class=\\"btn btn-link\\">Link</button>\\n <button class=\\"btn btn-success\\">Success</button>\\n <button class=\\"btn btn-error\\">Error</button>\\n </div>\\n </div>\\n </section>\\n </body>\\n</html>\\n\\n
The structure of this markup is pretty standard. Notice how each button element has two classes: the btn
class and a second class. We’ll refer to the btn
class, in this case, as the base class and the second class as the modifier class that consists of the btn-
prefix.
Next, add the following style tag content to the above HTML document:
\\n<style>\\n * {\\n border: 0;\\n }\\n\\n :root {\\n --primary: #0076c6;\\n --secondary: #333333;\\n --error: #ce0606;\\n --success: #009070;\\n --white: #ffffff;\\n }\\n\\n /* base style for all buttons */\\n .btn {\\n padding: 1rem 1.5rem;\\n background: transparent;\\n font-weight: 700;\\n border-radius: 0.5rem;\\n cursor: pointer;\\n }\\n\\n /* variations */\\n .btn-primary {\\n background: var(--primary);\\n color: var(--white);\\n }\\n\\n .btn-secondary {\\n background: var(--secondary);\\n color: var(--white);\\n }\\n\\n .btn-success {\\n background: var(--success);\\n color: var(--white);\\n }\\n\\n .btn-error {\\n background: var(--error);\\n color: var(--white);\\n }\\n\\n .btn-link {\\n color: var(--primary);\\n }\\n</style>\\n\\n
The btn
class contains the base styles for all the buttons and the variations come in where the individual modifier classes get access to their colors, which are defined at the :root
level of the document. This is extremely helpful not just for buttons, but for other elements in your HTML that can inherit the custom properties.
For example, if tomorrow you decide the value for the --error
custom property is too dull for a red color, you can easily switch it up to #f00000
. Once you do so, voila — all elements using this custom property are updated with a single change!
Here’s what your first project should look like:
\\nYou can access the complete source code and see a live preview of this project from this CodePen.
\\nThe document.documentElement.style.setProperty
method is used to set CSS variables dynamically with JavaScript, updating CSS variables in real-time without modifying the style sheet:
document.documentElement.style.setProperty(\'--primary-color\', \'green\')\\n\\n
This will update the --primary-color
variable, affecting all the elements that use it.
To see a practical use case for this, we’ll build the second project: a light-and-dark theme. The light theme will take effect by default unless the user already has their system set to a dark theme. On the page, we’ll create a toggle button that allows the user to switch between themes.
\\nFirst, add the following HTML structure into a new .html
file:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n <head>\\n <meta charset=\\"UTF-8\\" />\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1.0\\" />\\n <meta http-equiv=\\"X-UA-Compatible\\" content=\\"ie=edge\\" />\\n <title>CSS Variables - Theming</title>\\n </head>\\n <body>\\n <header>\\n <div class=\\"container\\">\\n <div class=\\"container-inner\\">\\n <a href=\\"#\\" class=\\"logo\\">My Blog</a>\\n <div class=\\"toggle-button-container\\">\\n <label class=\\"toggle-button-label\\" for=\\"checkbox\\">\\n <input type=\\"checkbox\\" class=\\"toggle-button\\" id=\\"checkbox\\" />\\n <div class=\\"toggle-rounded\\"></div>\\n </label>\\n </div>\\n </div>\\n </div>\\n </header>\\n <article>\\n <div class=\\"container\\">\\n <h1 class=\\"title\\">Title of article</h1>\\n <div class=\\"info\\">\\n <div class=\\"tags\\">\\n <span>#html</span>\\n <span>#css</span>\\n <span>#js</span>\\n </div>\\n <span>1st February, 2024</span>\\n </div>\\n <div class=\\"content\\">\\n <p>\\n Lorem ipsum dolor sit amet consectetur adipisicing elit.\\n <a href=\\"#\\">Link to another url</a> Eius, saepe optio! Quas\\n repellendus consequuntur fuga at. Consequatur sit deleniti, ullam\\n qui facere iure, earum corrupti vitae laboriosam iusto eius magni,\\n adipisci culpa recusandae quis tenetur accusantium eum quae harum\\n autem inventore architecto perspiciatis maiores? Culpa, officiis\\n totam! Rerum alias corporis cupiditate praesentium magni illo, optio\\n nobis fugit.\\n </p>\\n <p>\\n Eveniet veniam ipsa similique atque placeat dignissimos\\n quos reiciendis. Odit, eveniet provident fugiat voluptatibus esse\\n culpa ullam beatae hic maxime suscipit, eum reprehenderit ipsam.\\n Illo facilis doloremque ducimus reprehenderit consequuntur\\n cupiditate atque harum quaerat autem amet, et rerum sequi eum cumque\\n maiores dolores.\\n </p>\\n </div>\\n </div>\\n </article>\\n </body>\\n</html>\\n\\n
This snippet represents a simple blog page with a header, a theme toggle button, and a dummy article.
\\nNext, add the following style tag to add CSS definitions for the above HTML structure:
\\n<style>\\n :root {\\n --primary-color: #0d0b52;\\n --secondary-color: #3458b9;\\n --font-color: #424242;\\n --bg-color: #ffffff;\\n --heading-color: #292922;\\n --white-color: #ffffff;\\n }\\n\\n /* Layout */\\n * {\\n padding: 0;\\n border: 0;\\n margin: 0;\\n box-sizing: border-box;\\n }\\n\\n html {\\n font-size: 14px;\\n font-family: -apple-system, BlinkMacSystemFont, \'Segoe UI\', Roboto, Oxygen,\\n Ubuntu, Cantarell, \'Open Sans\', \'Helvetica Neue\', sans-serif;\\n }\\n\\n body {\\n background: var(--bg-color);\\n color: var(--font-color);\\n }\\n\\n .container {\\n width: 100%;\\n max-width: 768px;\\n margin: auto;\\n padding: 0 1rem;\\n }\\n\\n .container-inner {\\n display: flex;\\n justify-content: space-between;\\n align-items: center;\\n }\\n\\n /* Using custom properties */\\n a {\\n text-decoration: none;\\n color: var(--primary-color);\\n }\\n\\n p {\\n font-size: 1.2rem;\\n margin: 1rem 0;\\n line-height: 1.5;\\n }\\n\\n header {\\n padding: 1rem 0;\\n border-bottom: 0.5px solid var(--primary-color);\\n }\\n\\n .logo {\\n color: var(--font-color);\\n font-size: 2rem;\\n font-weight: 800;\\n }\\n\\n .toggle-button-container {\\n display: flex;\\n align-items: center;\\n }\\n\\n .toggle-button-container em {\\n margin-left: 10px;\\n font-size: 1rem;\\n }\\n\\n .toggle-button-label {\\n display: inline-block;\\n height: 34px;\\n position: relative;\\n width: 60px;\\n }\\n\\n .toggle-button-label .toggle-button {\\n display: none;\\n }\\n\\n .toggle-rounded {\\n background-color: #ccc;\\n bottom: 0;\\n cursor: pointer;\\n left: 0;\\n position: absolute;\\n right: 0;\\n top: 0;\\n transition: 0.4s;\\n }\\n\\n .toggle-rounded:before {\\n background-color: #fff;\\n bottom: 4px;\\n content: \'\';\\n height: 26px;\\n left: 4px;\\n position: absolute;\\n transition: 0.4s;\\n width: 26px;\\n }\\n\\n input:checked+.toggle-rounded {\\n background-color: #9cafeb;\\n }\\n\\n input:checked+.toggle-rounded:before {\\n transform: translateX(26px);\\n }\\n\\n article {\\n margin-top: 2rem;\\n }\\n\\n .title {\\n font-size: 3rem;\\n color: var(--font-color);\\n }\\n\\n .info {\\n display: flex;\\n align-items: center;\\n margin: 1rem 0;\\n }\\n\\n .tags {\\n margin-right: 1rem;\\n }\\n\\n .tags span {\\n background: var(--primary-color);\\n color: var(--white-color);\\n padding: 0.2rem 0.5rem;\\n border-radius: 0.2rem;\\n }\\n</style>\\n\\n
This snippet can be divided into two main sections: the layout styles and the custom properties. The latter is what you should focus on. As you can see, the variables are consumed by the link, paragraph, heading, and article rules.
\\nThe idea behind this approach is that, by default, the website uses a light theme, and when the box is checked, the values for the light theme get inverted to a dark variant.
\\nSince you can’t trigger these sitewide changes via CSS, JavaScript is critical here. In the next section, we’ll hook up the JavaScript code necessary to toggle between the light and dark themes.
\\nAlternatively, you could trigger a change automatically via CSS using the prefers-color-scheme
media query to detect whether the user's system requests a light or dark theme. That way, the website can switch to the dark variants of the theme without any user interaction.
Add the following snippet after the CSS code you just wrote:
\\n@media (prefers-color-scheme: dark) {\\n :root {\\n --primary-color: #325b97;\\n --secondary-color: #9cafeb;\\n --font-color: #e1e1ff;\\n --bg-color: #000013;\\n --heading-color: #818cab;\\n }\\n}\\n\\n
We’re listening to the user’s device settings and adjusting the theme to dark if they’re already using a dark theme.
\\nFinally, add the following script segment to the above HTML document:
<script>
  const toggleButton = document.querySelector('.toggle-button');
  toggleButton.addEventListener('change', toggleTheme, false);

  const theme = {
    dark: {
      '--primary-color': '#325b97',
      '--secondary-color': '#9cafeb',
      '--font-color': '#e1e1ff',
      '--bg-color': '#000013',
      '--heading-color': '#818cab'
    },
    light: {
      '--primary-color': '#0d0b52',
      '--secondary-color': '#3458b9',
      '--font-color': '#424242',
      '--bg-color': '#ffffff',
      '--heading-color': '#292922'
    }
  };

  function toggleTheme(e) {
    const themeChoice = e.target.checked ? 'dark' : 'light';
    useTheme(themeChoice);
    localStorage.setItem('theme', themeChoice);
  }

  function useTheme(themeChoice) {
    // Write each custom property of the chosen theme onto :root
    Object.entries(theme[themeChoice]).forEach(([property, value]) => {
      document.documentElement.style.setProperty(property, value);
    });
  }

  // Restore the last preferred theme when the user revisits
  const preferredTheme = localStorage.getItem('theme');
  if (preferredTheme === 'dark') {
    useTheme('dark');
    toggleButton.checked = true;
  } else {
    useTheme('light');
    toggleButton.checked = false;
  }
</script>
Now let’s break down the current state of the website.
\\nA user visits the page. The media query prefers-color-scheme
determines whether the user is using a light or dark theme. If it’s a dark theme, the website updates to use the dark variants of the custom properties.
Let’s say a user isn’t using a dark theme or their OS doesn’t support a dark theme. The browser would default to the light theme, allowing the user to control that behavior by checking or unchecking the box.
\\nDepending on whether the box is checked or unchecked, the useTheme()
function is called to pass in the theme variant and save the user’s current selection to local storage. You’ll see why it’s saved in a minute.
The useTheme()
function is where all the magic happens. Based on the theme variant passed, a lookup is performed on the theme
constant and used to switch between light and dark modes.
The last piece of the puzzle is persisting the current theme, which is achieved by reading the last preferred theme from local storage and setting it automatically when the user revisits the website.
\\nHere’s what your second project should look like:
\\nYou may be thinking of a million other ways to achieve this. Feel free to go through the code and make as many changes as you see fit. You can access the complete source code and see a live preview of this project from this CodePen.
\\nIn our third project, we’ll build a responsive login form that loads some adjustment values from CSS variables. Like the media query feature dynamically switches standard CSS properties, it also switches custom properties, so we can assign different values for variables within different responsive breakpoints.
\\nFirst, add the following content into a new HTML document:
\\n<!DOCTYPE html>\\n<html lang=\\"en\\">\\n\\n<head>\\n <meta charset=\\"UTF-8\\" />\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1.0\\" />\\n <meta http-equiv=\\"X-UA-Compatible\\" content=\\"ie=edge\\" />\\n <title>Responsive design with CSS variables</title>\\n</head>\\n\\n<body>\\n <div class=\\"form-box\\">\\n <input type=\\"text\\" value=\\"Username\\" />\\n <input type=\\"password\\" value=\\"Password\\" />\\n <button>Login</button>\\n </div>\\n</body>\\n\\n</html>\\n\\n
Here we created a simple login form that consists of two input elements and a button. Add the following style tag into this HTML document to style it properly:
\\n<style>\\n /* --- desktops and common --- */\\n :root {\\n --form-box-padding: 8px;\\n --form-box-flex-gap: 8px;\\n --form-input-font-size: 12px;\\n }\\n\\n * {\\n margin: 0;\\n padding: 0;\\n box-sizing: border-box;\\n }\\n\\n .form-box {\\n display: flex;\\n justify-content: flex-end;\\n gap: var(--form-box-flex-gap);\\n padding: var(--form-box-padding);\\n background-color: #333;\\n text-align: center;\\n }\\n\\n .form-box input,\\n .form-box button {\\n font-size: var(--form-input-font-size);\\n padding: 8px;\\n margin-right: 4px;\\n }\\n\\n .form-box input {\\n outline: none;\\n border: none;\\n }\\n\\n .form-box button {\\n border: none;\\n background-color: #edae39;\\n }\\n</style>\\n\\n
The above CSS snippet styles the login form only for desktop devices, so it won’t adjust content responsively when you resize the browser, as shown in the following preview:
\\nWe can simply make this page responsive by writing some styling adjustments — i.e., changing flex-direction
— inside media query breakpoints. For padding
or font-size
-like values-based properties, we can use CSS variables instead of writing CSS properties repetitively to improve the readability and maintainability of CSS definitions.
Look at the previous CSS snippet and you will notice three CSS variables. Override those variables inside media query blocks to complete the responsive screen handling, using the following code snippet:
\\n/* --- tablets --- */\\n@media screen and (min-width: 601px) and (max-width: 900px) {\\n :root {\\n --form-box-padding: 20px 12px 20px 12px;\\n --form-box-flex-gap: 12px;\\n --form-input-font-size: 14px;\\n }\\n\\n .form-box input,\\n .form-box button {\\n display: block;\\n width: 100%;\\n }\\n}\\n\\n/* --- mobiles --- */\\n@media screen and (max-width: 600px) {\\n :root {\\n --form-box-padding: 24px;\\n --form-box-flex-gap: 12px;\\n --form-input-font-size: 20px;\\n }\\n\\n .form-box {\\n flex-direction: column;\\n }\\n\\n .form-box input,\\n .form-box button {\\n display: block;\\n }\\n}\\n\\n
The above code snippet adjusts the layout for mobile and tablet screens with some standard CSS properties and custom properties. For example, it uses a different flex-direction
mode, display
mode for several elements, and the following custom property values for mobile screens:
--form-box-padding: 24px;\\n--form-box-flex-gap: 12px;\\n--form-input-font-size: 20px;\\n\\n
Test this project by resizing the browser window:
\\nTry adjusting these CSS variables and creating new ones to improve this login screen further. You can use the same strategy to use CSS variables with container queries. Check the complete source code and see a live preview from this CodePen.
\\nImagine that you need to create a colorful native checkbox list with multiple accent colors. Using different values for accent-color
via the inline style attribute is undoubtedly time-consuming since you have to define colors yourself. Hence, you may create this checkbox list dynamically with JavaScript.
However, what if this list gets rendered in a JavaScript-disabled environment, like inside a Markdown document? We can use CSS variables to generate JavaScript-free dynamic elements.
\\nLet’s create a colorful native checkbox list with CSS variables. Create a new HTML document and add the following style tag:
\\ninput[type=\\"checkbox\\"] {\\n width: 80px;\\n height: 80px;\\n --hue: calc(var(--i) * 50 + 100);\\n accent-color: hsl(var(--hue), 50%, 50%);\\n}\\n\\n
Here, we calculate a dynamic color for the accent-color
property using the hsl
color function.
For the hue input parameter, we use the --hue
variable which gets a dynamically calculated value using the --i
variable. This implementation lets us generate multiple colors by using different numbers for --i
.
Use the following HTML snippet to get multiple colorful native checkboxes:
\\n<div style=\\"text-align: center\\">\\n <input type=\\"checkbox\\" checked style=\\"--i: 0\\"/>\\n <input type=\\"checkbox\\" checked style=\\"--i: 1\\"/>\\n <input type=\\"checkbox\\" checked style=\\"--i: 2\\"/>\\n <input type=\\"checkbox\\" checked style=\\"--i: 3\\"/>\\n</div>\\n\\n
Here we set an index manually for the --i
variable via inline style attributes to generate a dynamic accent color. This approach is more productive than setting colors yourself for each checkbox element. Look at the following preview of the fourth project:
You can browse the complete source code and see a live preview from this CodePen. It's possible to use the same strategy to generate JavaScript-free dynamic elements by adjusting any standard CSS property value — for example, using --i to set dynamic image filter configurations.
The following table will help you know when to use CSS variables and preprocessor variables:
\\nFeature | \\nCSS variables | \\nPreprocessor variables | \\n
---|---|---|
Evaluation | Dynamically evaluated and can be modified at runtime | Compiled to static values before rendering |
Use cases | \\nGreat for dynamic theming, user-controlled styles, or runtime updates | \\nGreat for working with a large-scale project that benefits from functions, mixins, and nested styles | \\n
Usage | \\nWorks directly in browsers | \\nRequires a pre-processor like Less or SASS | \\n
Performance | Slightly higher runtime cost due to variable lookup, though negligible in most cases | No runtime cost, but larger compiled stylesheets can increase load time |
Runtime update | \\nCan be modified easily with JavaScript | \\nImpossible to update with JavaScript because it requires recompilation | \\n
CSS variables can be used with @keyframes to make animations more dynamic and reusable without touching the keyframes themselves. You can reference variables with var() inside keyframe blocks; just note that you can't animate the custom property itself — unregistered custom properties aren't interpolable, so redefining one inside @keyframes changes discretely unless the property is registered with @property. Also note that lighten() is a Sass function, not valid CSS, so a lighter variant needs its own variable:
:root {
  --btn-bg: #3498db;       /* Default background color */
  --btn-bg-light: #5dade2; /* Lighter variant for the pulse — lighten() is Sass, not CSS */
}

button {
  background-color: var(--btn-bg);
  color: white;
  padding: 12px 24px;
  font-size: 16px;
  border: none;
  cursor: pointer;
  animation: pulse 1.5s infinite alternate;
}

@keyframes pulse {
  from {
    background-color: var(--btn-bg);
  }
  to {
    background-color: var(--btn-bg-light);
  }
}
The background color changes dynamically based on --btn-bg. Adjusting --btn-bg (or --btn-bg-light) in :root instantly updates the animation colors!
Now we can use JavaScript to update the CSS variable in real time, animating the button's color on hover or other user interaction:
\\ndocument.querySelector(\\"button\\").addEventListener(\\"mouseover\\", () => {\\n document.documentElement.style.setProperty(\\"--btn-bg\\", \\"#e74c3c\\");\\n});\\n\\ndocument.querySelector(\\"button\\").addEventListener(\\"mouseout\\", () => {\\n document.documentElement.style.setProperty(\\"--btn-bg\\", \\"#3498db\\");\\n});\\n\\n
The button smoothly transitions between colors when hovered!
\\nAccording to the browser compatibility table of the official MDN documentation, the CSS variables feature is widely available in all popular browser versions released after April 2017. More specifically, browsers released this feature with the following versions:
\\nAccording to these statistics, using custom properties in production apps is possible since most users nowadays use up-to-date web browsers. However, it would be prudent to analyze your audience’s browser versions before using any new native CSS feature.
\\nHere are some common mistakes with CSS variables and how to fix them:
\\nSome older browsers, like IE11, do not support CSS variables, which can cause styling issues if a fallback is not provided. A common mistake is using var(--color-primary)
without specifying an alternative. To prevent this, always include a fallback value inside var()
, such as var(--color-primary, #3498db)
, ensuring that a default color is applied if the variable is unavailable.
CSS variables cannot be used directly in media queries, as they are not evaluated in the same way as standard values. For example, defining a variable like --breakpoint-mobile: 600px
in :root
and attempting to use it inside @media (max-width: var(--breakpoint-mobile))
will not work. Instead, media queries require fixed values, so it’s best to use predefined breakpoints directly, such as @media (max-width: 600px)
, to ensure proper functionality.
Assuming var() fails in properties like display or z-index

Contrary to a common belief, var() substitution works in any CSS property value, including display and z-index. The real pitfall is an invalid value: if the variable resolves to something the property can't accept, the declaration becomes invalid at computed-value time, and the property falls back to its inherited or initial value rather than the rule simply being skipped.

:root {
  --display-mode: 10px; /* Not a valid display value */
}

.container {
  display: var(--display-mode); /* Invalid at computed-value time — display falls back to its initial value (inline) */
}
By building these simple projects, you can learn how to use CSS variables like a pro. You can use the style
attribute to apply CSS variables directly to an HTML element like this <p style=\\"color: var(--primary-color);\\">Hello, world!</p>
. Also, you can debug CSS variable issues using the browser DevTools. There’s certainly more to them than I explained, so feel free to mess around with the code to explore further.
CSS variables help simplify the way you build websites and complex animations while still allowing you to write reusable and elegant code. Using CSS variables is also possible with React Native projects that run on the React Native Web renderer.
\\nCSS variables help developers repeatedly use the same values for properties in several segments of stylesheets.
\\nCSS variables are often used to help manage websites in which numerous values are similar to those in the document. The use of CSS variables helps to reduce the friction associated with refactoring or updating your code.
\\nCSS variables can be declared using either the --
prefix or the @property
at-rule.
Both Bash and Zsh are important and powerful tools used to perform advanced activities that may ordinarily not be available with GUI tools. Bash is a lightweight, fast, and widely compatible command-line shell that prioritizes simplicity and portability, whereas Zsh is a more sophisticated shell that's ideal for users who prefer customization and interactivity.
Bourne Again Shell, commonly known as Bash, is a command-line interface and scripting language used by Unix-based operating systems to interact with terminal commands.
\\nZ Shell, also known as Zsh, is also a Unix-based command line interpreter used to interact with terminal commands.
\\nSome of the common uses of Bash and Zsh are:
\\nLet’s compare Bash vs. Zsh, discuss the differences, and explore how to use both.
\\nFeature | \\nBash | \\nZsh | \\n
---|---|---|
Default on Linux | \\nYes | \\nNo, except for Kali Linux | \\n
Default on MacOS | \\nNo | \\nYes, since MacOS Catalina | \\n
Default on Windows | No, but it can run on Windows hosts via WSL or Git Bash | No, but it can be installed via WSL or Cygwin |
POSIX compliance | Yes | Not by default, though it can emulate POSIX sh |
Auto-completion | \\nBasic level | \\nAdvanced level | \\n
Support for plugins | \\nLimited support | \\nAdvanced support using Oh \\nMy Zsh | \\n
Syntax highlighting | \\nNo, basic CLI | \\nYes | \\n
Scripting capabilities | \\nPowerful | \\nOffers more customization for an intuitive scripting experience | \\n
Customization | \\nVery limited | \\nAdvanced customization | \\n
Speed | \\nVery fast | \\nFast, but can slow down with \\nloads of plugins | \\n
Popularity | \\nVery popular because it has \\nmore default support | \\nAlso popular, especially for \\npower users | \\n
To get the best experience, it is necessary to understand how the strengths of each align with your individual needs.
\\nYou should use Bash if:
\\nYou should use Zsh if:
Bash, renowned for its simplicity and versatility, originated in the GNU Project in 1989 as a free reimplementation of the older Bourne shell. Over the years, Bash has been the most widely used shell, adopted by most Linux systems and macOS. However, starting with macOS Catalina in 2019, Zsh replaced Bash as the default shell.
\\nBash’s POSIX compliance nature has made it possible for Bash scripts to be easily ported on virtually all Unix systems.
\\nBash is lightweight, enabling it to run efficiently on resource-low systems. With such a straightforward syntax, Bash is an excellent choice for many users who prioritize simplicity without compromising on features.
\\nThough it can be used for a variety of things, Bash is ideally used for scripting and automation.
\\nBash is the default on most Linux distros and can also be used on MacOS and Windows via the Windows Subsystem for Linux(WSL).
\\nBash is found on nearly every Unix-like system, making it easier for users of these systems to start using it without going through the process of installation.
\\n\\nBash can be used to run system admin functions like maintenance, and monitoring of system resources like RAM, CPU, and disk management. Additionally, it is used to manage system users, permissions, and privileges.
\\nOn Unix-based systems like Kali and MacOS, Bash can be used to create automation for routine tasks, build and compile software, or even write execution scripts for software.
\\nThe Bash shell can be used to configure, test, and manage system networks, and communicate with remote systems via SSH setup firewalls.
Automation is a big part of the command-line interface, and the Bash shell is useful for both writing and executing the scripts that drive it — think backups, report generation, and data processing at intervals. Its access to system resources, simple yet capable command style, and native integration into Unix systems make it an effective shell for this, as the sketch below shows.
\\nIf you use a system where Zsh or CMD is the default shell but would like to use Bash, here is a quick guide to help you set it up:
\\nYou can use Bash on a Windows host using different methods as highlighted in the table above. The most straightforward way — especially for individuals having issues with using the Windows Subsystem for Linux due to virtualization — is by using Git Bash.
\\nGit Bash is an app for Windows computers that allows Windows users to have a Git CLI experience.
\\nGo to the Git Bash download page, and download the latest version:
After downloading, install the application. During installation, keep the recommended defaults unless you need a specific feature that can be configured during setup:
\\nOnce fully installed, from your search bar, search for Git and click on Git Bash as an administrator:
Once setup is complete and Git Bash opens, you should see this:
\\nThis is where you can utilize Bash functions as a Windows user.
If you are a Mac user, you should know that while Zsh replaced Bash, it only did so as the default shell. Thus, Bash still exists on MacBooks. Their coexistence is similar to how CMD and PowerShell both exist on Windows hosts, with the user able to choose their default shell.

To use Bash on your Mac, you can type bash in your terminal. This starts a Bash session that interprets any commands typed afterwards, until that session is closed. Alternatively, you can set it as your default shell by using the change shell command:
\\nchsh -s /bin/bash\\n\\n
chsh is the command for changing the login shell, the -s flag specifies the shell you are changing to, and /bin/bash is that shell's path.
It is worth noting that the Bash shipped with macOS is outdated (it is pinned to version 3.2 for licensing reasons). You may have to install the latest version manually to enjoy up-to-date features. To do that, use a package manager such as Homebrew by typing brew install bash.
Command | Description
---|---
pwd | Show current directory
cd [dir] | Change directory
ls | List files
cp [src] [dest] | Copy file/directory
mv [src] [dest] | Move/rename file/directory
rm [file] | Remove file
mkdir [dir] | Create directory
touch [file] | Create an empty file
cat [file] | Show file content
head [file] | Show first lines of a file
tail [file] | Show last lines of a file
grep [pattern] [file] | Search pattern in file
find [path] -name [pattern] | Find files by name
chmod [permissions] [file] | Change file permissions
ps | Show running processes
top | Real-time process monitor
kill [PID] | Kill a process by ID
df | Show disk space usage
free | Show memory usage
ping [host] | Test network connectivity
curl [URL] | Transfer data from/to server
tar -czf [archive] [dir] | Create a compressed archive
tar -xzf [archive] | Extract a compressed archive
echo ${var} | Show environment variable value
export [var]=[value] | Set an environment variable
history | Show command history
diff [file1] [file2] | Compare files
fdisk -l | List disk partitions
mount [device] [dir] | Mount a device
Zsh (Z Shell) is a Unix shell that is an improved and more sophisticated alternative to the default Bash shell. Developed in 1990, Zsh borrows features from the Korn Shell (ksh) and the C shell (csh) but adds more sophisticated features like improved tab completion, better scripting, and personalization. The flexibility and Bash-command compatibility of Zsh made it the go-to for most developers.
Apple switched the default macOS shell from Bash to Zsh in 2019 with macOS Catalina, since Zsh offers more contemporary features and better long-term support.
\\nOne of Zsh’s biggest strengths is that it comes with context-aware autocompletion built into its shell environment for commands, paths, and arguments.
Through its spelling correction, Zsh fixes typos and suggests previously used commands as you type, saving time and helping newcomers work more efficiently.
Though Bash comes with wildcard matching, Zsh's globbing is more powerful and precise. For example, ls **/*.txt recursively matches every .txt file under the current working directory, as the examples after this paragraph show.
Zsh comes with better customization on the prompt, command, and shortcuts. Additionally, it supports Oh My Zsh, a framework that allows Zsh users to access thousands of plugins for extra customization, some of which include Git, Docker, and even Python.
\\nAmong its customization benefits, it also permits users to tweak their zshrc
file to modify shell behaviour.
Its superior syntax highlighting makes it easier for Zsh users to distinguish between commands and parameters in the shell.
\\nDevelopers who spend a lot of time in the terminal would benefit from its autosuggestions, fuzzy searching, and typo correction. Multiple plugins support streamlining their workflow and save time.
While Bash has traditionally handled these tasks, Zsh is equally capable of server management and system administration. This means macOS and Kali Linux users with Zsh as their default shell can still manage foreground and background processes effectively.
\\nIts plugin support lets Zsh users manage cloud and DevOps services, especially for Terraform and AWS CLI users.
\\nIn general, users who require advanced functionalities and extra customization for an intuitive flow would love its rich plugin support, high theming, and intelligent completion. While users who want to stick to the conventional smart way of doing more with fewer configurations would benefit from Bash and its built-in features.
\\nZsh is the default on Mac. Thus, we won’t be discussing its installation for Mac users in this section. Rather, we would explore using Zsh on Windows and Linux systems that don’t have it as the default shell.
Zsh, just like Bash, can be run on Windows using similar methods: WSL and Cygwin.

Download the Cygwin installer (.exe) from the official site, then run it to get the full package:
During installation, type in Zsh as the shell you want to get, and click Next:
\\nIf you prefer using WSL, install it via the command wsl - - install
. Once installed, you can use
\\nthe sudo command to install Zsh. This also applies for Linux users:
sudo apt update && sudo apt install zsh -y\\n\\n
sudo runs the command with superuser access, and the -y flag makes the installation non-interactive by automatically answering yes to every prompt. If you'd prefer to confirm each step manually, omit that flag.
After installation, verify it with zsh --version. Once that checks out, you can make Zsh your default shell using chsh -s $(which zsh).
Oh My Zsh is an open-source Zsh framework used to add extra functionality to Zsh, turbocharging the entire user experience. Its advanced features are a big reason heavy terminal users gravitate towards Zsh.
To install Oh My Zsh from your Zsh terminal, you can use curl, fetch, or wget.
Using curl
:
sh -c \\"$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\\"\\n\\n
Using fetch
:
sh -c \\"$(fetch -o - https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\\"\\n\\n
Using wget
:
sh -c \\"$(wget -O- https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)\\"\\n\\n
If for some reason you are unable to reach raw.githubusercontent.com, replace the URL in the command with https://install.ohmyz.sh/.
After Oh My Zsh has been installed, restart your terminal for the changes to take effect.
(For example, with the web-search plugin enabled, typing google oh-my-zsh or web_search google oh-my-zsh runs a web search straight from the terminal.)
Due to the number of plugins Zsh supports, it can run very slowly. Users who need both speed and advanced customization can be left wondering which to choose.
)Due to the number of plugins Zsh supports, it can run very slowly. Users who may require both speed and advanced customizations may be caught in a loop on which to choose.
\\nLuckily for them, just as VS Code extensions can be disabled to make them lighter, the same can be said for Zsh (Oh My Zsh).
Disabling Zsh plugins can be quite tricky, but understand that you should not delete Oh My Zsh itself — just disable the plugins. You can do this either via a command or by manually editing the config file.

The command method of disabling a plugin is omz plugin disable [plugin name] — e.g., omz plugin disable git.
To confirm that your plugin is disabled, run omz plugin list
.
The manual method is to edit the .zshrc file. Open it with nano ~/.zshrc, locate the plugins line, remove the plugin you want from the list (see the example after this paragraph), then save and exit.
Bash and Zsh are both important tools for scripting and working around different things in the interactive shell, but their distinctive differences make each shell more applicable for users with different needs.
Ultimately, those who love simplicity and just want to get their stuff done will choose Bash. Advanced users (or those whose work has turned the terminal into their second home) might gravitate towards Zsh.
\\nBut then, there is no crime in having both running on the same system, right? That’s one thing Mac users enjoy out-of-the-box. But regardless of your system, you can still run any or both of these powerful shells with just some minimal workarounds, which have been explored here.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nIn this post, we’ll evaluate 10 React Native charts libraries:
\\nThe cross-platform mobile app framework React Native has many open-source libraries that help to represent data in charts and graphs. When choosing a React Native chart or graph library, the following criteria must be considered:
\\nWe’ll explore these React Native chart and graph libraries and evaluate them based on the criteria mentioned above to determine which ones are most suitable for specific requirements.
\\nEditor’s note: This article was last reviewed and updated by Emmanuel John to add information on additional React native charts libraries, including RNE-Pro, react-native-wagmi-charts, and D3.js . For a deeper dive into React Native libraries, check out our guide to the best component libraries and the top routing libraries.
\\nThe React Native community has reported several issues with data visualization libraries like CLCchart, react-native-svg-chart and react-native-echarts-wrapper. Developers have encountered errors during installation, particularly when using Yarn. One such issue involved a failed postinstall
script when running yarn add CLCchart
.
While reviewing react-native-svg-chart library setup, I noticed some installation errors as a result of dependency conflicts when using react-native-svg versions beyond 7.0.3, as react-native-svg-charts specifies peer dependencies for versions ^6.2.1 || ^7.0.3
.
Given the lack of recent activity and the fact that these libraries have not been maintained for 6+ years now, this article’s current update excludes them.
\\nreact-native-charts-wrapper is an open source library that supports both iOS and Android devices. It is based on native charting libraries such as MPAndroidChart and iOS charts. It also offers a large number of supported chart types, including line, scatter, bubble, pie, radar, bar, combined, and candlestick.
Its v0.6.0 release introduces substantial improvements, upgrading its iOS charting dependency to DGCharts (formerly Charts) to prevent naming conflicts with Apple's SwiftUI Charts.
\\nAs a library, react-native-charts-wrapper is well documented and explains how to get integrated into a new React Native app with a step-by-step tutorial.
It also highlights the major conventional differences between iOS and Android. For example, the color’s alpha on Android is between the range of zero to 255, and on iOS, it is zero to one. This information is helpful for someone getting started with a chart library in their mobile app for the first time.
RNE-Pro is another great React Native charts library based on Apache ECharts. Its support for both charts and maps sets it apart. It's very easy to learn if you already use ECharts for data visualization.
\\nWith its continuous updates, maintenance, and upgrades, RNE-PRO is one of the most actively maintained React Native data visualization libraries. It also has great documentation.
\\nA charting library that supports both Expo apps and React Native vanilla apps is worth serious consideration.
\\nreact-native-chart-kit is built on top of famous open source projects such as react-native-svg
, paths-js
, and react-native-calendar-heatmap
. It supports patterns such as line, bezier line, pie, progress ring, stacked bar, and contribution graph (also known as a heat map).
Some of the patterns that react-native-chart-kit supports are unique when compared to other libraries mentioned in this article. Each of the patterns has its own set of props, which makes it easier to customize data on a mobile app screen:
\\nApart from some unique patterns, react-native-chart-kit also allows you to render a responsive chart by using the Dimensions API from React Native and calculating the width of the device’s screen. Each chart component also accepts a style
prop that can be applied to the parent SVG or View
component to customize the default styles of that chart pattern.
It has noticeable issues with large datasets. One such issue is the axis labels being cut off or not rendering correctly.
\\n\\nreact-native-pie-chart is an open source library that is simple to use and offers two different variants to display data in the form of a pie chart. It is useful for scenarios where you are required to represent data in a pie chart but want to keep the bundle size of your app small. Most libraries, as discussed in this article, offer a variety of components and patterns that are usually going to increase the overall bundle size of the app.
react-native-pie-chart offers a set of props to apply custom styles or switch between the two shapes it supports. This makes it easy to configure and understand.
Its version 4.0.0 release adds support for labels on pie chart slices and the ability to set a gap between slices. These features were highly requested by the community.
\\nLike react-native-pie-chart, react-native-responsive-linechart is dedicated to representing data in the form of lines on a mobile screen. Written completely in TypeScript, react-native-responsive-linechart has a composable API for different types of representations of a line chart. It supports adding tooltips and a large number of data points.
\\nreact-native-responsive-linechart depends on only two external libraries: react-native-svg and react-native-gesture-handler. By enabling the latter dependency, this library can support scrollable charts by setting a viewport
prop.
Lastly, because it doesn’t depend heavily on other libraries and supports only one type of charting pattern, react-native-responsive-linechart has a total package size of only 62 kilobytes (unzipped).
\\n\\n
CLCchart used to be a go-to React Native chart library for stock market data, though it has since been abandoned. react-native-wagmi-charts is a better replacement with line charts, candlestick charts, smooth data transition animations, and highly customizable APIs. This makes it a great choice for financial and stock market applications.
The library currently supports only line and candlestick charts, but the following features make it an even better alternative than other React Native financial and stock market data visualization libraries:
\\nBuilt and maintained by a team of developers at Formidable Labs, Victory is a charting library that supports different patterns in modular forms and ready-to-use components for both React and React Native applications. The React Native variant is known as Victory Native.
\\nAll of the components provided by Victory can be used to visualize data in various formats and support complete customization in terms of styles and behavior. It is easy to install and integrate this charting library in a React Native app.
victory-native has migrated from using react-native-svg as its only peer dependency to using three peer dependencies (React Native Reanimated, Gesture Handler, and Skia), which also require explicit installation. This is because react-native-svg wasn't designed to support dynamic updates of a large number of nodes over the bridge.

Its current version was rewritten from the ground up in TypeScript, with improved performance. It's now actively maintained by Nearform.
\\nApart from a different set of chart components that Victory supports, the library comes with many perks. One is the support of animations and transitions. With the help of an animate
prop, the animation changes can be applied to a VictoryChart
component. The animation is possible using d3-interpolate, which is a collection of interpolation methods.
The default transitions on a VictoryChart
component are customized using props such as onEnter
and onExit
:
The next advantage that Victory has over other libraries mentioned in this article is the support for material and grayscale design themes. A set of colors can be defined in the form of an array, and the typography to represent the data such as the font family, font size, and letter spacing. To label a specific data set or a representation of data in a chart, VictoryTooltip
is also available to add tooltips to charts.
Victory offers many configuration options and information on how to create your own custom chart components.
\\nreact-native-gifted-charts is a powerful library used for creating visually appealing and interactive charts in React Native applications.
\\nreact-native-gifted-charts comes with everything you need to create beautiful and animated bar, line, area, pie, donut, and stacked bar charts in React Native.
react-native-gifted-charts is clickable and scrollable, includes 3D and gradient effects, and also provides smooth animations, implemented with React Native's LayoutAnimation API. react-native-gifted-charts enables you to add animations to your charts that occur both when the chart is initially loaded and when its values change. This means that when a chart's value is updated, users will experience a smooth layout transition rather than a sudden change.
Its latest released version 1.4.57 now supports new chart types such as population pyramid and radar charts. It also has smoother animations and efficient rendering with a large dataset.
\\nOne of the key benefits of using react-native-gifted-charts is its simplicity. Developers can easily integrate it into their projects and start creating charts in a matter of minutes. The library is also highly flexible, allowing developers to easily modify and extend its functionality to suit their needs.
\\nD3.js is the most powerful data visualization library in React Native, widely used in both web and mobile applications. It’s a free, open-source JavaScript library and most of the data visualization libraries discussed in this article use D3.js under the hood to create data visualization elements.
D3.js has a steep learning curve and can be very complex. You’ll need to understand concepts such as CSV parser to load data, time scale, categorical scheme, linear curve, selections, linear scale, and much more to create a single chart component.
\\nD3.js allows you to build each chart element from scratch using react-native-svg, meaning you can build customized charts to meet your specific needs. It has one of the best documentation and learning resources.
\\nI’d recommend D3.js for developers who want to build high-level chart libraries and are ready to build every chart component from scratch.
\\nReact Native ECharts is an open-source library built on Apache Echarts and designed to create interactive and customizable charts for React Native applications.
Its usage is similar to ECharts, and code can also be reused for web applications, making it cost-efficient and flexible. It supports Skia and SVG as rendering libraries, allowing you to choose the right option for specific use cases. It supports panning, zooming, and touch gestures for creating more interactive charts on mobile devices.
\\nWhen selecting a charting library, you should choose one that meets your application’s requirements.
\\nFor instance, if you need a highly customizable charting solution, libraries like react-native-charts-wrapper and react-native-gifted-charts are excellent options.
\\nFor a project that requires a lightweight chart library with basic customization, react-native-pie-chart and react-native-responsive-linechart would be better choices.
\\nUltimately, selecting the most suitable charting library depends on your specific project needs, and the decision should be based on a thorough evaluation of the available options.
The open-source charting libraries included in this list were chosen based on personal experience or because they are actively maintained and have substantial popularity in the community. All the libraries covered in this article have been tested with the current React Native version (even with the New Architecture flag enabled), and they all worked perfectly.
\\nIf you are familiar with any other charting library in the React Native ecosystem that is not mentioned in this post, leave it in the comment section below and tell us why you like it.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nIn this post, we’ll evaluate 10 React Native charts libraries:
\\nThe cross-platform mobile app framework React Native has many open-source libraries that help to represent data in charts and graphs. When choosing a React Native chart or graph library, the following criteria must be considered:
\\nWe’ll explore these React Native chart and graph libraries and evaluate them based on the criteria mentioned above to determine which ones are most suitable for specific requirements.
\\nEditor’s note: This article was last reviewed and updated by Emmanuel John to add information on additional React native charts libraries, including RNE-Pro, react-native-wagmi-charts, and D3.js . For a deeper dive into React Native libraries, check out our guide to the best component libraries and the top routing libraries.
\\nThe React Native community has reported several issues with data visualization libraries like CLCchart, react-native-svg-chart and react-native-echarts-wrapper. Developers have encountered errors during installation, particularly when using Yarn. One such issue involved a failed postinstall
script when running yarn add CLCchart
.
While reviewing react-native-svg-chart library setup, I noticed some installation errors as a result of dependency conflicts when using react-native-svg versions beyond 7.0.3, as react-native-svg-charts specifies peer dependencies for versions ^6.2.1 || ^7.0.3
.
Given the lack of recent activity and the fact that these libraries have not been maintained for 6+ years now, this article’s current update excludes them.
\\nreact-native-charts-wrapper is an open source library that supports both iOS and Android devices. It is based on native charting libraries such as MPAndroidChart and iOS charts. It also offers a large number of supported chart types, including line, scatter, bubble, pie, radar, bar, combined, and candlestick.
\\nIts V0.6.0 introduces substantial improvements with an upgrade to DGCharts. Its previous iOS charting library was updated to DGCharts to prevent conflicts with Apple’s SwiftUI Charts.
\\nAs a library, react-native-charts-wrapper is well documented and explains how to get integrated into a new React Native app with a step-by-step tutorial.
It also highlights the major conventional differences between iOS and Android. For example, the color’s alpha on Android is between the range of zero to 255, and on iOS, it is zero to one. This information is helpful for someone getting started with a chart library in their mobile app for the first time.
\\nRNE-Pro is another great React Native charts library based on Apache ECharts. Its support for both charts and maps makes it very unique. It’s very easy to learn if you use Echarts for data visualization.
\\nWith its continuous updates, maintenance, and upgrades, RNE-PRO is one of the most actively maintained React Native data visualization libraries. It also has great documentation.
\\nA charting library that supports both Expo apps and React Native vanilla apps is worth serious consideration.
\\nreact-native-chart-kit is built on top of famous open source projects such as react-native-svg
, paths-js
, and react-native-calendar-heatmap
. It supports patterns such as line, bezier line, pie, progress ring, stacked bar, and contribution graph (also known as a heat map).
Some of the patterns that react-native-chart-kit supports are unique when compared to other libraries mentioned in this article. Each of the patterns has its own set of props, which makes it easier to customize data on a mobile app screen:
\\nApart from some unique patterns, react-native-chart-kit also allows you to render a responsive chart by using the Dimensions API from React Native and calculating the width of the device’s screen. Each chart component also accepts a style
prop that can be applied to the parent SVG or View
component to customize the default styles of that chart pattern.
It has noticeable issues with large datasets. One such issue is the axis labels being cut off or not rendering correctly.
\\n\\nreact-native-pie-chart is an open source library that is simple to use and offers two different variants to display data in the form of a pie chart. It is useful for scenarios where you are required to represent data in a pie chart but want to keep the bundle size of your app small. Most libraries, as discussed in this article, offer a variety of components and patterns that are usually going to increase the overall bundle size of the app.
\\nreact-native-pie-chart offers a set of props to apply custom styles or switches between the two shapes it offers. This makes it easy to configure and understand.
\\nIts version 4.0.0 release now supports labels to pie chart slices, and the ability to set a gap between pie slices. These features were highly requested by the community.
\\nLike react-native-pie-chart, react-native-responsive-linechart is dedicated to representing data in the form of lines on a mobile screen. Written completely in TypeScript, react-native-responsive-linechart has a composable API for different types of representations of a line chart. It supports adding tooltips and a large number of data points.
\\nreact-native-responsive-linechart depends on only two external libraries: react-native-svg and react-native-gesture-handler. By enabling the latter dependency, this library can support scrollable charts by setting a viewport
prop.
Lastly, because it doesn’t depend heavily on other libraries and supports only one type of charting pattern, react-native-responsive-linechart has a total package size of only 62 kilobytes (unzipped).
\\n\\n
CLCchart used to be a go-to React Native chart library for stock market data, though it has since been abandoned. react-native-wagmi-charts is a better replacement with line charts, candlestick charts, smooth data transition animations, and highly customizable APIs. This makes it a great choice for financial and stock market applications.
The library currently supports only line and candlestick charts, but the following features make it an even better alternative than other React Native financial and stock market data visualization libraries:
\\nBuilt and maintained by a team of developers at Formidable Labs, Victory is a charting library that supports different patterns in modular forms and ready-to-use components for both React and React Native applications. The React Native variant is known as Victory Native.
\\nAll of the components provided by Victory can be used to visualize data in various formats and support complete customization in terms of styles and behavior. It is easy to install and integrate this charting library in a React Native app.
\\nvictory-native has migrated from using react-native-svg as its only peer dependency to using three peer dependencies (React Native Reanimated, Gesture Handler, and Skia) which also require explicit installation. This is because react-native-svg wasn’t designed to support dynamic updates of large number of nodes over the bridge.
\\nIts current version was re-written from the ground up using Typescript, with improved performance. It’s now actively maintained by Nearform.
\\nApart from a different set of chart components that Victory supports, the library comes with many perks. One is the support of animations and transitions. With the help of an animate
prop, the animation changes can be applied to a VictoryChart
component. The animation is possible using d3-interpolate, which is a collection of interpolation methods.
The default transitions on a VictoryChart
component are customized using props such as onEnter
and onExit
:
The next advantage that Victory has over other libraries mentioned in this article is the support for material and grayscale design themes. A set of colors can be defined in the form of an array, and the typography to represent the data such as the font family, font size, and letter spacing. To label a specific data set or a representation of data in a chart, VictoryTooltip
is also available to add tooltips to charts.
Victory offers many configuration options and information on how to create your own custom chart components.
\\nreact-native-gifted-charts is a powerful library used for creating visually appealing and interactive charts in React Native applications.
\\nreact-native-gifted-charts comes with everything you need to create beautiful and animated bar, line, area, pie, donut, and stacked bar charts in React Native.
\\nreact-native-gifted-charts is clickable and scrollable, includes 3D and gradient effects, and also provides smooth animations that can be implemented using the LayoutAnimation
prop. react-native-gifted-charts enables you to add animations to your charts that occur both when the chart is initially loaded and when its values are changed. This means that when a chart’s value is updated, users will experience a smooth layout transition rather than a sudden change.
Its latest released version 1.4.57 now supports new chart types such as population pyramid and radar charts. It also has smoother animations and efficient rendering with a large dataset.
\\nOne of the key benefits of using react-native-gifted-charts is its simplicity. Developers can easily integrate it into their projects and start creating charts in a matter of minutes. The library is also highly flexible, allowing developers to easily modify and extend its functionality to suit their needs.
\\nD3.js is the most powerful data visualization library in React Native, widely used in both web and mobile applications. It’s a free, open-source JavaScript library and most of the data visualization libraries discussed in this article use D3.js under the hood to create data visualization elements.
D3.js has a steep learning curve and can be very complex. You’ll need to understand concepts such as CSV parser to load data, time scale, categorical scheme, linear curve, selections, linear scale, and much more to create a single chart component.
\\nD3.js allows you to build each chart element from scratch using react-native-svg, meaning you can build customized charts to meet your specific needs. It has one of the best documentation and learning resources.
\\nI’d recommend D3.js for developers who want to build high-level chart libraries and are ready to build every chart component from scratch.
\\nReact Native ECharts is an open-source library built on Apache Echarts and designed to create interactive and customizable charts for React Native applications.
\\nIts usage is similar to Echart and code can also be reused for web applications, making it cost-efficient and flexible. It supports Skia and SVG as the rendering libraries, allowing you to choose the option for specific use cases. It supports panning, zooming, and touch gestures for creating more interactive charts on mobile devices.
\\nWhen selecting a charting library, you should choose one that meets your application’s requirements.
\\nFor instance, if you need a highly customizable charting solution, libraries like react-native-charts-wrapper and react-native-gifted-charts are excellent options.
\\nFor a project that requires a lightweight chart library with basic customization, react-native-pie-chart and react-native-responsive-linechart would be better choices.
\\nUltimately, selecting the most suitable charting library depends on your specific project needs, and the decision should be based on a thorough evaluation of the available options.
\\nThe open-source charting libraries included in the list are either based on personal experience or are actively maintained and have substantial popularity in the community. All the libraries covered in this article have been tested with the current React Native version (even with the New Architecture flag enabled) and they all worked perfectly.
\\nIf you are familiar with any other charting library in the React Native ecosystem that is not mentioned in this post, leave it in the comment section below and tell us why you like it.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nnpm
\\n In web development, creating interactive and engaging user interfaces is crucial for retaining user attention. Sliders are a popular way to showcase content in a dynamic, visually appealing format while maximizing space.
\\nWith a variety of slider libraries available, selecting one that balances performance, responsiveness, and functionality can be a challenge.
\\nIn this article, we’ll dive into Swiper.js, a powerful and flexible slider library. We’ll walk through how to install and configure it, explore its core features, and demonstrate how to integrate it with frameworks like React and Vue for seamless UI development.
\\nSwiper.js is a modern, flexible JavaScript library that enables developers to easily integrate touch-enabled sliders with smooth animations and seamless interactions into their websites and web applications. Designed with a mobile-first approach, Swiper.js ensures flawless performance on both mobile and desktop devices, supporting gestures such as swiping, scrolling, and pinch-to-zoom.
\\nAs a free and open-source library with over 40k stars on GitHub and contributions from 300+ developers, Swiper.js is regularly updated to stay ahead of evolving web standards. It also boasts a modular plugin system and works seamlessly with popular frameworks like React, Vue, and Svelte, making it a go-to choice for developers seeking dynamic, user-friendly experiences.
\\nKey features include:
\\nCreating a slider from scratch can take a lot of time, especially when dealing with different browsers or touch gestures. Swiper.js streamlines this process by providing:
\\nBefore we explore Swiper.js features, let’s install and set up the library. You can add Swiper.js to your project in several ways: using a CDN, downloading it locally, or installing it with npm.
\\nTo quickly use Swiper.js without installing anything, you can add it through a CDN. Just include the following <link>
and <script>
tags in your HTML file:
<link\\n rel=\\"stylesheet\\"\\n href=\\"https://cdn.jsdelivr.net/npm/swiper@11/swiper-bundle.min.css\\"\\n/>\\n\\n<script src=\\"https://cdn.jsdelivr.net/npm/swiper@11/swiper-bundle.min.js\\"></script>\\n\\n
If you use ES modules in the browser, there is also a CDN version available for that:
\\n<script type=\\"module\\">\\n import Swiper from \'https://cdn.jsdelivr.net/npm/swiper@11/swiper-bundle.min.mjs\'\\n\\n const swiper = new Swiper(...)\\n</script>\\n\\n
If you prefer to use the Swiper files on your computer, you can download them from here.
\\nnpm
For better project organization and management, you can install Swiper.js using npm:
\\nnpm install swiper\\n\\n
This method is great for using Swiper.js in frameworks such as React, Vue, or Next.js. Now you can import it into your JavaScript file like this:
\\n// import Swiper JS\\nimport Swiper from \'swiper\';\\n// import Swiper styles\\nimport \'swiper/css\';\\n\\nconst swiper = new Swiper(...);\\n\\n
Swiper only exports the core version by default, which does not include extra features like navigation and pagination. To use these features, you need to import them from swiper/modules
and set them up:
// core version + navigation, pagination modules:

import Swiper from 'swiper';
import { Navigation, Pagination } from 'swiper/modules';

// import Swiper and modules styles
import 'swiper/css';
import 'swiper/css/navigation';
import 'swiper/css/pagination';

// init Swiper:
const swiper = new Swiper('.swiper', {
  // configure Swiper to use modules
  modules: [Navigation, Pagination],
  ...
});
If you want to use Swiper with all features included, you should import it from swiper/bundle
:
// import Swiper bundle with all modules installed
import Swiper from 'swiper/bundle';

// import styles bundle
import 'swiper/css/bundle';

// init Swiper:
const swiper = new Swiper(...);

After installing Swiper.js, start by creating your HTML structure and then initialize it in JavaScript. Create a container with slides inside:
\\n<div class=\\"swiper\\">\\n <!-- Additional required wrapper --\x3e\\n <div class=\\"swiper-wrapper\\">\\n <div class=\\"swiper-slide\\">Slide 1</div>\\n <div class=\\"swiper-slide\\">Slide 2</div>\\n <div class=\\"swiper-slide\\">Slide 3</div>\\n </div>\\n</div>\\n\\n
After you set up the HTML structure, initialize Swiper in your script:
\\n// Initialize Swiper\\nconst swiper = new Swiper(\'.swiper\', {\\n // Swiper options\\n});\\n\\n
The Swiper object accepts two parameters: the CSS selector (such as .swiper) or the HTML element of the slider container, and an object of configuration options for that slider.

Swiper has an array of parameters that can be passed into its configuration options; visit its docs to view these.
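For instance, here's a small configuration sketch. The option names (direction, slidesPerView, spaceBetween, speed) are standard Swiper parameters, and the selector comes from our markup above:

const swiper = new Swiper('.swiper', {
  direction: 'horizontal', // slide left/right rather than top/bottom
  slidesPerView: 1,        // number of slides visible at once
  spaceBetween: 16,        // gap between slides, in px
  speed: 400,              // transition duration in ms
});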
Following these steps gives us a basic Swiper slider. Here's what our setup looks like with some added CSS styles:
html,\\n body {\\n position: relative;\\n width: 100%;\\n height: 100%;\\n }\\n\\n body {\\n background: #eee;\\n font-family: Helvetica Neue, Helvetica, Arial, sans-serif;\\n font-size: 14px;\\n color: #000;\\n margin: 0;\\n padding: 0;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n }\\n\\n .swiper {\\n width: 600px;\\n height: 400px;\\n }\\n\\n .swiper-slide {\\n text-align: center;\\n font-size: 18px;\\n background: #fff;\\n display: flex;\\n justify-content: center;\\n align-items: center;\\n }\\n\\n .swiper-slide img {\\n display: block;\\n width: 100%;\\n height: 100%;\\n object-fit: cover;\\n }\\n\\n
As we move along, we’ll demo some of the configuration options.
Swiper.js is a powerful touch slider library that can be customized easily. This section will cover the styles Swiper ships with, the methods and properties available on a Swiper instance, and the events you can hook into.
\\nSwiper provides a variety of styles to easily customize your sliders, available in multiple formats such as CSS, Less, and SCSS for easy integration. Here’s an overview of the available styles:
\\nswiper-bundle.css
— Includes all Swiper styles, including modules like Navigation and Paginationswiper-bundle.min.css
— A minified version of the full bundleswiper/css
— Contains core Swiper styles and all module-specific stylesswiper/less
— Includes core Swiper Less styles and all module-specific stylesswiper/less/bundle
— Includes all Swiper Less styles, including modules like Navigation, Pagination, and moreswiper/scss
— Contains core Swiper SCSS styles and all module-specific stylesswiper/scss/bundle
— Includes all Swiper SCSS styles, including modules like Navigation, Pagination, and othersSwiper allows you to import only the styles you need for specific features, enhancing both performance and customization. For example, you can import swiper/css/navigation
to include only the styles for Navigation.
\\nThis modular approach ensures you’re not loading unnecessary styles, keeping your project lightweight. Check out the Swiper style documentation for more style options.
Swiper provides a set of methods and properties to control and customize its behavior in real time. These allow you to interact with the slider, navigate through slides, update settings, or even remove the instance when needed. Here are some key examples:
\\nslideNext()
— Moves to the next slideslidePrev()
— Moves to the previous slideswiper.slideTo(index, speed, runCallbacks)
— Jumps to a specific slide by its indexswiper.update()
— Updates the Swiper instance to accommodate layout changesswiper.destroy(deleteInstance, cleanStyles)
— Destroys the Swiper instance, with options to delete it and clean up stylesconst swiper = new Swiper(\'.swiper\', {\\n// Configuration options\\n});// Move to the next slide\\nswiper.slideNext();// Move to the third slide (index 2)\\nswiper.slideTo(2);\\n
swiper.activeIndex
— The index of the current slideswiper.isBeginning
— Checks if the slider is at the first slideswiper.previousIndex
— The index of the previous slide (can be set to a specific number)swiper.allowSlideNext
— Controls the ability to slide to the next slide (true/false)swiper.allowSlidePrev
— Controls the ability to slide to the previous slide (true/false)swiper.el
— The HTML element of the slider containerswiper.width
— The width of the slider containerswiper.height
— The height of the slider containerswiper.swipeDirection
— Specifies the sliding direction, either prev
or next
console.log(swiper.activeIndex); // Logs the current slide index

For a full list of methods and properties, visit the Swiper Methods and Properties documentation.
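As a quick sketch of how these fit together, here's an example of our own (using the event handling covered in the next section) that reads state properties and locks backward navigation while the first slide is active:

swiper.on('slideChange', () => {
  console.log('Active index:', swiper.activeIndex);

  // Disallow sliding backward while we're on the first slide
  swiper.allowSlidePrev = !swiper.isBeginning;
});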
\\nSwiper.js offers several events that enable you to interact with the slider and respond to user actions. These events are useful for adding custom features, like tracking activity or updating content dynamically. Common events include:
\\ninit
— Triggered when Swiper initializesslideChange
— Fires when the active slide changesreachBeginning
— Triggered when the first slide is reachedreachEnd
— Fires when the last slide is reachedYou can listen to events using the on
method during Swiper initialization:
const swiper = new Swiper(\'.swiper\', { \\n on: { \\n init: function () { \\n console.log(\'Swiper initialized!\'); \\n }, \\n slideChange: function () { \\n console.log(\'Slide changed to:\', this.activeIndex); \\n }, \\n }, \\n}); \\n\\n
Alternatively, you can use event listeners:
\\nconst swiper = new Swiper(\\".swiper\\", {\\n // ...\\n});\\n\\nswiper.on(\'reachEnd\', function () { \\n console.log(\'Reached the end of the slider!\'); \\n}); \\n\\n
For a complete list of events, refer to the Swiper Events documentation.
\\nSwiper.js offers several features that enhance the user experience and make sliders more interactive. In this section, we’ll cover important Swiper modules like navigation controls, pagination, scrollbars, lazy loading, autoplay, and more. These modules allow for easy customization and dynamic functionality, giving you full control over your sliders.
\\nSwiper includes built-in navigation buttons that allow users to navigate between slides. You can customize these buttons using the following parameters:
\\nprevEl
— A string representing the CSS selector or HTML element for the “previous” buttonnextEl
— A string representing the CSS selector or HTML element for the “next” buttonTo enable navigation, ensure your HTML includes the corresponding button elements:
\\n<div class=\\"swiper\\">\\n <div class=\\"swiper-wrapper\\">\\n <div class=\\"swiper-slide\\">Slide 1</div>\\n <div class=\\"swiper-slide\\">Slide 2</div>\\n <div class=\\"swiper-slide\\">Slide 3</div>\\n </div>\\n\\n <!-- Navigation Buttons --\x3e\\n <div class=\\"swiper-button-next\\"></div>\\n <div class=\\"swiper-button-prev\\"></div>\\n</div>\\n\\n
Then, enable navigation in your Swiper configuration:
\\nimport Swiper from \'swiper\';\\nimport { Navigation } from \'swiper/modules\';\\nimport \'swiper/swiper.min.css\';\\nimport \'swiper/modules/navigation.min.css\';\\n\\nconst swiper = new Swiper(\'.swiper\', {\\n modules: [Navigation],\\n navigation: { // Adds next/prev buttons\\n nextEl: \'.swiper-button-next\',\\n prevEl: \'.swiper-button-prev\',\\n },\\n});\\n\\n// Alternative\\n\\nconst swiper = new Swiper(\'.swiper\', {\\n modules: [Navigation],\\n});\\nswiper.nextEl = \'.swiper-button-next\';\\nswiper.prevEl = \'.swiper-button-prev\';\\n\\n
In the code above, we use the navigation.prevEl
and navigation.nextEl
parameters to target the .swiper-button-prev
and .swiper-button-next
classes in the HTML template. This setup links the Swiper instance with the appropriate navigation buttons. Here’s an example:
You can style the navigation buttons with CSS or use custom elements instead. For more navigation options, visit Swiper navigation modules.
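For example, Swiper's default buttons can be restyled through the CSS custom properties it exposes (the values below are arbitrary):

.swiper {
  --swiper-navigation-color: #333; /* arrow color */
  --swiper-navigation-size: 28px;  /* arrow size */
}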
\\nPagination in Swiper.js helps users track their current slide and navigate directly to specific slides. You can customize pagination using these options:
\\nbullets
— An array of HTML elements representing pagination bullets. Access a specific slide’s bullet using swiper.pagination.bullets[1]
el
— This option holds the HTML element for the pagination container.To use pagination, include a container for it, like so:
\\n<div class=\\"swiper\\">\\n <div class=\\"swiper-wrapper\\">\\n <div class=\\"swiper-slide\\">Slide 1</div>\\n <div class=\\"swiper-slide\\">Slide 2</div>\\n <div class=\\"swiper-slide\\">Slide 3</div>\\n </div>\\n\\n <!-- Pagination --\x3e \\n <div class=\\"swiper-pagination\\"></div>\\n</div>\\n\\n
Then set up the pagination option:
\\nimport Swiper from \'swiper\';\\nimport { Navigation, Pagination } from \'swiper/modules\';\\nimport \'swiper/swiper.min.css\';\\nimport \'swiper/modules/pagination.min.css\';\\n\\nconst swiper = new Swiper(\'.swiper\', {\\n modules: [Pagination, Navigation],\\n // Other parameters\\n pagination: { // Enables pagination\\n el: \'.swiper-pagination\',\\n clickable: true,\\n type: \'bullets\', // \'bullets\' or \'fraction\'\\n },\\n});\\n\\n
In this setup, pagination.el
targets the .swiper-pagination
element in the HTML. Setting clickable: true
allows users to navigate slides by clicking pagination bullets. The type
option can be set to bullets
for dot indicators or fraction
to display a format like “1/5”:
For more pagination options, visit the Swiper pagination modules page.
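Bullets can also be customized with the renderBullet parameter, which controls each bullet's markup. A short sketch (the numbering inside each bullet is our own choice):

const swiper = new Swiper('.swiper', {
  modules: [Pagination],
  pagination: {
    el: '.swiper-pagination',
    clickable: true,
    // Render numbered bullets instead of plain dots
    renderBullet: (index, className) =>
      `<span class="${className}">${index + 1}</span>`,
  },
});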
\\nLazy loading ensures that images are only loaded when they are about to appear on the screen, improving the initial load time and performance of our application. To use lazy loading in Swiper.js, we need to set loading=\\"lazy\\"
on images and add an animated preloader element swiper-lazy-preloader
. The animated preloader adds a spinner to the slide.
Let’s see lazy loading in action:
\\n<div class=\\"swiper\\">\\n <div class=\\"swiper-wrapper\\">\\n <!-- Lazy-loaded image --\x3e\\n <div class=\\"swiper-slide\\">\\n <img src=\\"https://images.unsplash.com/vector-1739283864562-fea5e6376d2e?q=80&w=2146&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\\" loading=\\"lazy\\" />\\n <div class=\\"swiper-lazy-preloader\\"></div>\\n </div>\\n\\n <!-- Lazy-loaded image with srcset for different resolutions --\x3e\\n <div class=\\"swiper-slide\\">\\n <img\\n src=\\"https://images.unsplash.com/vector-1738426079979-a92c2f584fc5?q=80&w=1800&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\\"\\n loading=\\"lazy\\"\\n />\\n <div class=\\"swiper-lazy-preloader\\"></div>\\n </div>\\n </div>\\n</div>\\n\\n
Here’s the demo:
\\nFor more lazy loading options, visit Swiper lazy loading modules.
\\nA scrollbar helps users navigate by providing a draggable handle, allowing them to browse through slides more easily. In Swiper, you can customize the scrollbar settings. Some options Swiper offers include:
\\ndragEl
— Holds the element for the draggable scrollbar handleel
— Holds the element for the scrollbar containerTo set it up, add a container for the scrollbar:
\\n<div class=\\"swiper\\">\\n <div class=\\"swiper-wrapper\\">\\n <!-- Slides --\x3e\\n </div>\\n <!-- Scrollbar --\x3e\\n <div class=\\"swiper-scrollbar\\"></div>\\n</div>\\n\\n
Now, enable the scrollbar
parameter:
const swiper = new Swiper(\'.swiper\', {\\n // Other parameters\\n scrollbar: { // Enables manual scrolling\\n el: \'.swiper-scrollbar\',\\n draggable: true,\\n },\\n});\\n\\n
The scrollbar.el
parameter targets the .swiper-scrollbar
element in the HTML. Setting draggable
to true allows users to drag the scrollbar handle to navigate through slides. For more scrollbar options, visit the documentation on Swiper scrollbar modules:
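As with navigation and pagination, the scrollbar is a separate module: if you use Swiper's core build rather than the bundle, it needs to be imported and registered. A minimal sketch:

import Swiper from 'swiper';
import { Scrollbar } from 'swiper/modules';
import 'swiper/css';
import 'swiper/css/scrollbar';

const swiper = new Swiper('.swiper', {
  modules: [Scrollbar], // register the module with the core build
  scrollbar: {
    el: '.swiper-scrollbar',
    draggable: true,
  },
});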
Looping makes the slider move continuously, while autoplay automatically transitions slides after a set delay. Swiper allows you to customize both features. Some options include:
\\nloop
— Enables looping (default is false
)delay
— Sets the time (in milliseconds) between slide transitions; autoplay is off if not setdisableOnInteraction
— When true (the default), stops autoplay permanently after user interactions such as swipes; set it to false to have autoplay restart after each interaction

To configure loop and autoplay:
\\nconst swiper = new Swiper(\'.swiper\', {\\n // Other parameters\\n loop: true,\\n autoplay: {\\n delay: 3000, // Time in milliseconds between transitions\\n disableOnInteraction: false,\\n },\\n});\\n\\n
Setting loop
to true allows the slides to loop continuously. The autoplay setting manages automatic slide changes, where delay
defines the time between changes. Setting disableOnInteraction
to false keeps autoplay running even after users interact with the slides:
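Autoplay can also be controlled at runtime. A small sketch of our own (the toggle button is an assumption, and with the core build the Autoplay module must be imported and registered first):

const toggleButton = document.querySelector('#autoplay-toggle');

toggleButton.addEventListener('click', () => {
  // swiper.autoplay exposes start() and stop(), plus a `running` flag
  if (swiper.autoplay.running) {
    swiper.autoplay.stop();
  } else {
    swiper.autoplay.start();
  }
});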
Swiper.js works well with modern JavaScript frameworks like React, Vue, and Angular. This section will show you how to use Swiper effectively in React and Vue applications.
\\nTo add Swiper to a React project, first install the swiper
package. You can do this using npm:
npm install swiper\\n\\n
After installation, import and use Swiper components in your React app.
\\nIn React, there are two main components: Swiper
and SwiperSlide
. The Swiper
component acts as the main container for the slider, while SwiperSlide
represents each individual slide that holds the content. Here’s a basic example:
import React from \'react\';\\nimport { Swiper, SwiperSlide } from \\"swiper/react\\";\\nimport \\"swiper/css\\";\\n\\nexport default () => {\\n return (\\n <Swiper>\\n <SwiperSlide>Content 1</SwiperSlide>\\n <SwiperSlide>Content 2</SwiperSlide>\\n <SwiperSlide>Content 3</SwiperSlide>\\n <SwiperSlide>Content 4</SwiperSlide>\\n </Swiper>\\n );\\n};\\n\\n
To add more features like navigation and pagination, install, import, and set up the necessary modules:
\\nimport { Navigation, Pagination } from \'swiper/modules\';\\nimport { Swiper, SwiperSlide } from \'swiper/react\';\\n\\n// Import required styles\\nimport \'swiper/css\';\\nimport \'swiper/css/navigation\';\\nimport \'swiper/css/pagination\';\\n\\nexport default () => {\\n return (\\n <Swiper\\n modules={[Navigation, Pagination]}\\n navigation\\n pagination={{ clickable: true }}\\n >\\n <SwiperSlide>Slide 1</SwiperSlide>\\n {/* ... */}\\n </Swiper>\\n );\\n};\\n\\n
Swiper has React-specific features that improve its use in React projects.
\\nThe main Swiper component serves as the container for your slideshow, supporting standard Swiper options and React-specific properties for enhanced functionality:
\\n<Swiper\\n tag=\\"section\\" // Define a custom HTML element (default: \'div\')\\n onSwiper={(swiper) => { // Access the Swiper instance immediately after initialization\\n console.log(\'Swiper instance:\', swiper);\\n // Store the instance in a ref or state if needed\\n }}\\n // Event handling with proper TypeScript support\\n onSlideChange={(swiper) => {\\n const activeIndex = swiper.activeIndex;\\n console.log(`Slide changed to index ${activeIndex}`);\\n }}\\n>\\n {/* Slides go here */}\\n</Swiper>\\n\\n
The SwiperSlide
component does more than just wrap slides. It also allows you to customize each slide and see their state:
<SwiperSlide\\n tag=\\"div\\" // Customize the slide element type\\n zoom={true} // Enable zoom functionality for this slide\\n virtualIndex={5} // Required only for virtual slides\\n>\\n {/* Slide content */}\\n</SwiperSlide>\\n\\n
Another key feature is the option to show content based on a slide’s state. Check it out below:
\\n<div\\n style={{\\n display: \\"flex\\",\\n justifyContent: \\"center\\",\\n alignItems: \\"center\\",\\n height: \\"100vh\\",\\n }}\\n >\\n <Swiper\\n slidesPerView={3}\\n loop={true}\\n watchSlidesProgress={true}\\n modules={[Navigation, Pagination]}\\n navigation\\n pagination={{ clickable: true }}\\n style={{ width: \\"60%\\", padding: \\"20px\\" }}\\n >\\n {[...Array(5)].map((_, index) => (\\n <SwiperSlide key={index}>\\n {({ isActive, isPrev, isNext, isVisible, isDuplicate }) => (\\n <div\\n style={{\\n padding: \\"20px\\",\\n border: \\"2px solid\\",\\n borderColor: isActive\\n ? \\"blue\\"\\n : isPrev\\n ? \\"green\\"\\n : isNext\\n ? \\"purple\\"\\n : \\"gray\\",\\n opacity: isVisible ? 1 : 0.5,\\n backgroundColor: isDuplicate ? \\"#f8d7da\\" : \\"#e9ecef\\",\\n textAlign: \\"center\\",\\n fontSize: \\"18px\\",\\n }}\\n >\\n <p>Slide {index + 1}</p>\\n <p>{isActive && \\"Active\\"}</p>\\n <p>{isPrev && \\"Previous\\"}</p>\\n <p>{isNext && \\"Next\\"}</p>\\n <p>{isVisible && \\"Visible\\"}</p>\\n <p>{isDuplicate && \\"Duplicate\\"}</p>\\n </div>\\n )}\\n </SwiperSlide>\\n ))}\\n </Swiper>\\n </div>\\n\\n
Swiper provides helpful hooks to control and respond to the slideshow from any part of the component.
\\nThe useSwiper
hook lets you access the Swiper instance from anywhere in the Swiper component:
// SlideController.jsx\\nimport { useSwiper } from \'swiper/react\';\\n\\nfunction SlideController() {\\n const swiper = useSwiper();\\n\\n return (\\n <div className=\\"controls\\">\\n <button onClick={() => swiper.slidePrev()}>Previous</button>\\n <button onClick={() => swiper.slideNext()}>Next</button>\\n </div>\\n );\\n}\\n\\n
The useSwiperSlide
hook gives slide state information to any component within a slide:
// SlideIndicator.jsx\\nimport { useSwiperSlide } from \'swiper/react\';\\n\\nfunction SlideIndicator() {\\n const slide = useSwiperSlide();\\n\\n return (\\n <div className={`indicator ${slide.isActive ? \'active\' : \'\'}`}>\\n {slide.isActive ? \'Active Slide\' : \'Inactive Slide\'}\\n </div>\\n );\\n}\\n\\n
To install Swiper in a Vue project, use npm:
\\nnpm install swiper\\n\\n
Similar to React, Swiper for Vue offers the same components, modules, props, and event listeners. Here’s a basic setup:
\\n<script setup>\\nimport { Swiper, SwiperSlide } from \\"swiper/vue\\";\\nimport \\"swiper/css\\";\\n</script>\\n\\n<template>\\n <Swiper>\\n <SwiperSlide>Slide 1</SwiperSlide>\\n <SwiperSlide>Slide 2</SwiperSlide>\\n <SwiperSlide>Slide 3</SwiperSlide>\\n <SwiperSlide>Slide 4</SwiperSlide>\\n </Swiper>\\n</template>\\n\\n
To use navigation, pagination, and scrollbars in Vue, import and configure the necessary modules:
\\n<template>\\n <swiper\\n :modules=\\"[Navigation, Pagination]\\"\\n navigation\\n :pagination=\\"{ clickable: true }\\"\\n >\\n <!-- Slides --\x3e\\n </swiper>\\n</template>\\n\\n<script>\\nimport { Navigation, Pagination } from \'swiper/modules\';\\nimport { Swiper, SwiperSlide } from \'swiper/vue\';\\n\\nexport default {\\n components: {\\n Swiper,\\n SwiperSlide,\\n },\\n setup() {\\n return {\\n modules: [Navigation, Pagination],\\n };\\n },\\n};\\n</script>\\n\\n
Swiper in Vue accepts standard Swiper parameters as props and supports Vue's event-handling syntax. Here's an example using a couple of useful props:
\\n<template>\\n <Swiper \\n :slides-per-view=\\"3\\"\\n :space-between=\\"50\\"\\n @slideChange=\\"onSlideChange\\"\\n >\\n <SwiperSlide>Slide 1</SwiperSlide>\\n <SwiperSlide>Slide 2</SwiperSlide>\\n </Swiper>\\n</template>\\n\\n<script setup>\\nconst onSlideChange = () => {\\n console.log(\\"Slide changed!\\");\\n};\\n</script>\\n\\n
Swiper Vue.js components are set to be deprecated in future versions of Swiper. For new projects, it’s recommended to use Swiper Element instead. However, the current implementation remains fully functional for existing applications.
\\n\\nSwiper Element is a version of the Swiper designed for web components. It allows you to create sliders and carousels using standard web technology.
\\nThere are two ways to include the Swiper Element in your project. Either by using CDN or npm:
\\nUsing npm:
\\n// Install\\n$ npm install swiper\\n\\n// Register Swiper custom elements\\nimport { register } from \'swiper/element/bundle\';\\nregister();\\n\\n
Using CDN:
\\n<script src=\\"https://cdn.jsdelivr.net/npm/swiper@11/swiper-element-bundle.min.js\\"></script>\\n\\n
With this option, Swiper Element is registered automatically, so there is no need to call register().
When you install Swiper Element and call register()
or add a script tag, there are two web components (custom elements) at your disposal: <swiper-container>
and <swiper-slide>
. These components enable you to specify all parameters and the Swiper slide element:
<swiper-container pagination=\\"true\\" navigation=\\"true\\">\\n <swiper-slide>Slide 1</swiper-slide>\\n <swiper-slide>Slide 2</swiper-slide>\\n <swiper-slide>Slide 3</swiper-slide>\\n</swiper-container>\\n\\n
Using Swiper Element in React can be challenging since React doesn't fully support custom elements. To configure parameters, use attributes on the element or directly assign properties. Complex setups require specific initialization steps, as standard React prop syntax won't work for all settings.
\\nEvent handling works differently in Swiper Element. Standard React event formats on[Event]
won’t work. Instead, use addEventListener
or manage events via the on
parameter object during setup.
Here’s how to use the Swiper element in React:
\\nimport { useRef } from \'react\';\\nimport { register } from \'swiper/element/bundle\';\\n\\nregister();\\n\\nexport function ReactElement() {\\n const swiperElRef = useRef(null);\\n\\n useEffect(() => {\\n // Handle events...\\n swiperElRef.current.addEventListener(\\"swiperprogress\\", (e) => {\\n const [swiper, progress] = e.detail;\\n console.log(progress);\\n });\\n }, []);\\n\\n return (\\n <swiper-container ref={swiperElRef}>\\n {/* Slides */}\\n </swiper-container>\\n );\\n}\\n\\n
Vue fully supports web components. You can pass attributes as props and listen for custom events:
\\n<template>\\n <swiper-container\\n :slides-per-view=\\"3\\"\\n @swiperslidechange=\\"onSlideChange\\"\\n >\\n {/* Slides */}\\n </swiper-container>\\n</template>\\n\\n<script>\\n import { register } from \'swiper/element/bundle\';\\n\\n register();\\n\\n export default function () {\\n setup() {\\n\\n const onSlideChange = (e) => {\\n console.log(\'slide changed\')\\n }\\n\\n return {\\n onSlideChange,\\n };\\n }\\n }\\n</script>\\n\\n
Swiper.js is a powerful and flexible JavaScript library for creating touch-friendly sliders. In this article, we covered its installation methods—CDN, direct download, and npm—along with key features like navigation, pagination, lazy loading, scrollbars, and autoplay.
We also explored its integration with frameworks like React and Vue, highlighting essential components and methods. Additionally, we introduced Swiper Element, a modern alternative built on web components.
\\nWith its extensive customization options, performance optimizations, and broad framework support, Swiper is a top choice for building dynamic, interactive sliders. Check out its documentation to unlock its full potential.
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nfetch()
method\\n GET
requests\\n POST
requests\\n Just as the name suggests, the Fetch API is an easy way to fetch resources from a remote or local server through a JavaScript interface.
\\nThis means that the browser can directly make HTTP requests to web servers. It’s made possible by the fetch()
method, which returns a promise that resolves with a Response object.
The fetch()
method takes a mandatory argument, which is the path to the remote server or source from which we want to fetch data. Once the response is received, it then becomes the developer’s responsibility to decide how to handle the body content and display it in some HTML template, for example.
Below is a somewhat basic fetch()
example that grabs data from a remote server:
fetch(url)\\n .then(\\n //handle the response\\n )\\n .catch(\\n //handle the errors\\n )\\n\\n
The example above uses simple promises to implement the fetch()
method. We specify a URL and store it in a const variable. In our case, the remote server URL is a placeholder used purely for illustration.
Over the course of this article, we’ll be fine-tuning this code snippet to show how to make maximum use of the Fetch API when making calls.
The fetch() method

To follow along with this guide, you'll need an understanding of basic JavaScript concepts such as promises, async/await, and callback functions.
For this tutorial, we want to simulate an environment where you would be working with an API. To do that, we'll use JSONPlaceholder, a free fake API for testing, which will serve as the means through which we reach our server.
GET requests

A GET
request is used to retrieve data from a server. By default, all HTTP requests are GET
unless specified otherwise.
For example, if we’re building a to-do list app, we need to fetch and display tasks on the front end. Using JavaScript, we can target an unordered list (ul
) in our HTML and populate it dynamically with the fetched data. Here’s how we can set up the HTML structure:
<ul id=\\"list\\">\\n\\n</ul>\\n\\n
When we enter a to-do list item, it gets stored on our server. To retrieve those items, we need to use the GET
request.
The first thing we’ll do in our JavaScript is get the ul
element from the DOM via its ID so that we can append list items to it later:
const ul = document.getElementById(\\"list\\")\\n\\n
We then store the URL of the API that connects us to the remote server in a variable called url:
const url = \\"https://jsonplaceholder.typicode.com/todos\\"\\n\\n
We’ve gotten the variables we need. Now we can get to working with fetch()
. Remember that the fetch()
method requires just one mandatory parameter: the URL. In this case, we'll pass in our url
variable:
fetch(url)\\n\\n
This alone won’t give us the data we need since the response isn’t in JSON
format. We need to parse it so that we can work with the data and display it in our HTML. To do this, we use the .json()
method:
fetch(url)\\n .then(response => response.json())\\n\\n
After fetch()
is run, it returns a promise that’s resolved to a Response
object. The .then()
method above is used to process this Response
object. The .json()
method is called on the object, and it returns another promise that resolves to the JSON data we need:
fetch(url)\\n .then(response => response.json())\\n .then(data => {\\n\\n })\\n\\n
The second .then()
above is used to handle the JSON data returned by the previous .then()
. The data
parameter is the parsed JSON data.
Now that we have the data, how do we output it to the HTML template to make it visible on the page?
\\n\\nWe have a ul
element, and we want to display the data as a list of to-do items. For each to-do item that we fetch, we’ll create a li
element, set the text to the item we’ve fetched, and then append the li
to the ul
element.
Here’s how to achieve this result:
\\nfetch(url)\\n .then(response => response.json())\\n .then(data => {\\n data.forEach(todo => {\\n const li = document.createElement(\\"li\\")\\n li.innerText = todo.title\\n ul.appendChild(li)\\n })\\n })\\n\\n
Our complete JavaScript logic should look like so:
\\nconst ul = document.getElementById(\\"list\\")\\nconst url = \\"https://jsonplaceholder.typicode.com/todos\\"\\n\\nfetch(url)\\n .then(response => response.json())\\n .then(data => {\\n data.forEach(todo => {\\n const li = document.createElement(\\"li\\")\\n li.innerText = todo.title\\n ul.appendChild(li)\\n })\\n })\\n\\n
With this, we should see a list of to-do items displayed on the webpage. By default, JSONPlaceholder returns 200 to-do items, but we can always adjust our logic to display fewer. Each item also carries more than just a title; for example, it includes the completed status of the task.
POST requests

Now that we've seen how to get data from the server, we'll see how to add data to the server. Imagine filling out a form or entering data on a website. This data needs to get to the database somehow, and we usually achieve this using POST
requests.
In our HTML code, we’ll have a small form with an input field and a submit button:
\\n<form id=\\"todo-form\\">\\n <input type=\\"text\\" id=\\"todo-input\\" placeholder=\\"Enter your task here...\\" required>\\n <button type=\\"submit\\">Add To-do</button>\\n</form>\\n\\n
Then over in JavaScript, we want to grab two elements and specify the URL, through which we make requests:
\\nconst form = document.getElementById(\\"todo-form\\");\\nconst input = document.getElementById(\\"todo-input\\");\\n\\nconst url = \\"https://jsonplaceholder.typicode.com/todos\\"\\n\\n
We’ll create an event listener so that the request is made each time we submit the form:
\\nform.addEventListener(\\"submit\\", (event) => {\\n event.preventDefault()\\n\\n})\\n\\n
Data is usually sent to the backend as objects with specific key-value pairs representing the format in which we want to store our data. In our case, we want to store the title and status of a to-do item.
\\nFor example:
\\nconst newToDo = {\\n title: input.value,\\n completed: false,\\n}\\n\\n
Now, we can start setting up our POST
request. Unlike the GET
request, which only requires one parameter, the POST
request needs two. The first is the URL, and the second is an object that includes the method
, body
, and headers
keys:
fetch(url, {\\n method: \\"POST\\",\\n body: JSON.stringify(newTodo),\\n headers: {\\n \\"Content-Type\\": \\"application/json\\",\\n },\\n})\\n\\n
The method
key defines the type of request being made. In this case, it’s set to POST
, indicating that we’re sending data to the server. The body
contains the data, formatted as a JSON string using JSON.stringify(newTodo)
. We’ll cover the headers
in the next section.
These are the basics of a simple [POST request](https://blog.logrocket.com/how-to-make-http-post-request-with-json-body-in-go/)
. Our final JavaScript logic will look something like this:
const form = document.getElementById(\\"todo-form\\");\\nconst input = document.getElementById(\\"todo-input\\");\\n\\nconst url = \\"https://jsonplaceholder.typicode.com/todos\\";\\n\\nform.addEventListener(\\"submit\\", (event) => {\\n event.preventDefault();\\n const newTodo = {\\n title: input.value,\\n completed: false,\\n };\\n\\n fetch(url, {\\n method: \\"POST\\",\\n body: JSON.stringify(newTodo),\\n headers: {\\n \\"Content-Type\\": \\"application/json\\",\\n },\\n });\\n});\\n\\n
Besides GET
and POST
, there are a variety of other operations that you can use when working with data. You can visit the MDN docs to learn more about these requests and how to use them.
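For instance, updating or removing a to-do keeps the same shape as our POST example; only the method (and, for PUT, the body) changes. A quick sketch against the same JSONPlaceholder endpoint:

// Update to-do #1 (PUT replaces the entire resource)
fetch("https://jsonplaceholder.typicode.com/todos/1", {
  method: "PUT",
  body: JSON.stringify({ title: "Updated task", completed: true }),
  headers: { "Content-Type": "application/json" },
});

// Delete to-do #1
fetch("https://jsonplaceholder.typicode.com/todos/1", {
  method: "DELETE",
});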
So far, our GET
and POST
examples assume everything goes smoothly—but what if they don’t? What happens if the resource doesn’t exist or a network error occurs while sending data?
To handle these cases, we can append a .catch
method to catch network errors and check the response status to handle HTTP errors. Let’s look at how to make our fetch requests more resilient.
Let’s revamp the code above to account for potential errors:
\\nform.addEventListener(\\"submit\\", (event) => {\\n event.preventDefault();\\n const newTodo = {\\n title: input.value,\\n completed: false,\\n };\\n fetch(url, {\\n method: \\"POST\\",\\n headers: {\\n \\"Content-Type\\": \\"application/json\\",\\n },\\n body: JSON.stringify(newTodo),\\n })\\n .then((response) => {\\n if (!response.ok) {\\n throw new Error(`HTTP error! status: ${response.status}`)\\n }\\n console.log(\\"Todo added\\")\\n })\\n .catch((error) => {\\n console.error(\\"Error:\\", error);\\n });\\n});\\n\\n
We use .then
to check if the response is OK—if not, we throw an error with the status code. Then, .catch
handles any fetch errors and logs them to the console.
Proper error handling ensures issues are caught and communicated effectively. HTTP status codes play a key role here, as each one has a specific meaning and requires a different response. Some of the most common status codes include:
\\n200 OK
404 Not Found
500 Internal Server Error
When working with large amounts of data, we can’t always wait to completely fetch all the data from the server before we process it. Streaming is a technique that allows us to process data in chunks. The data is processed this way until it’s all retrieved from the server. This is a way of improving application responsiveness and performance.
\\nLet’s transform our previous GET
request example into one that implements streaming.
We initiate the request to the API and check if the response is OK:
\\nfetch(url)\\n .then(response => {\\n if(!response.ok){\\n throw new Error(`HTTP error! status: ${response.status}`)\\n }\\n\\n })\\n\\n
We need a reader to read the response body in chunks and decode the chunks into text. This decoding is done with the help of TextDecoder()
:
const reader = response.body.getReader()\\nconst decoder = new TextDecoder()\\nlet result = \\"\\"\\n\\n
Each time a chunk arrives, we decode it, append it to the result string, and read the next chunk recursively. Once the stream is complete, we log a message, parse the accumulated text as JSON, and append each to-do item to the list. Parsing only after the stream finishes matters: an individual chunk is usually an incomplete JSON fragment, and calling JSON.parse on it would throw:
\\nreturn reader.read().then(function processText({ done, value }) {\\n if (done) {\\n console.log(\\"Stream complete\\")\\n return\\n }\\n\\n //decode and parse JSON\\n result += decoder.decode(value, { stream: true });\\n const todos = JSON.parse(result)\\n\\n //add each to-do to the list\\n todos.forEach((todo) => {\\n const li = document.createElement(\\"li\\")\\n li.innerText = todo.title\\n list.appendChild(li)\\n })\\n return reader.read().then(processText)\\n})\\n\\n
At the end, we can add a .catch
to account for any errors:
.catch((error) => {\\n console.error(\\"Error\\", error)\\n})\\n\\n
Our final JavaScript file for streaming would look like this:
\\nconst list = document.getElementById(\\"to-do\\");\\nconst url = \\"https://jsonplaceholder.typicode.com/todos\\";\\n\\n\\n fetch(url)\\n .then((response) => {\\n if (!response.ok) {\\n throw new Error(`HTTP error! status: ${response.status}`);\\n }\\n const reader = response.body.getReader()\\n const decoder = new TextDecoder()\\n let result = \\"\\"\\n return reader.read().then(function processText({ done, value }) {\\n if (done) {\\n console.log(\\"Stream complete\\")\\n return\\n }\\n result += decoder.decode(value, { stream: true });\\n const todos = JSON.parse(result)\\n todos.forEach((todo) => {\\n const li = document.createElement(\\"li\\");\\n li.innerText = todo.title\\n list.appendChild(li)\\n })\\n return reader.read().then(processText);\\n })\\n })\\n .catch((error) => {\\n console.error(\\"Error:\\", error);\\n })\\n\\n
In addition to serving and receiving data from the client, the server needs to understand every request it receives. This is possible with the help of headers, which act as metadata and accompany requests. These key-value pairs tell the server what kind of request the client is making, and how to respond to it.
\\nTo understand headers, we must discuss the two categories that exist: fetch request headers and fetch response headers.
\\nAs the name implies, request headers tell the server what kind of request you’re making and may include conditions the server needs to fulfill before responding.
\\nIn our POST
example, we used the Content-Type
header to specify that we were sending JSON data. Another important header is Authorization
, which carries authentication details like tokens or API keys for secure API access. The Accept
header tells the server which data format we prefer in the response.
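Putting these together, a request that sets both headers might look like this (the token value is a placeholder, not a real credential):

fetch(url, {
  headers: {
    Accept: "application/json", // the response format we prefer
    Authorization: "Bearer your-token-here", // placeholder API credential
  },
});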
Some headers, known as forbidden header names, are automatically set by the browser and can't be modified programmatically.
\\nWhen the server processes a request, it responds with headers that provide important details about the response.
\\nKey response headers include Status Code
, which indicates whether the request was successful (200 OK
) or encountered an issue (500 Server Error
). Content-Length
specifies the size of the returned data, while Content-Type
reveals the format, such as JSON or HTML.
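Response headers can be read off the Response object through its headers property. For example:

fetch(url).then((response) => {
  console.log(response.status); // e.g. 200
  console.log(response.headers.get("Content-Type")); // e.g. "application/json; charset=utf-8"
  console.log(response.headers.get("Content-Length")); // body size, if provided
});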
Some headers function in both requests and responses, like:
\\nCache-Control
: Manages browser caching behavior.Accept-Encoding
: Tells the server which compression formats the client supports.Headers help streamline communication between the client and server, improving response handling. For a deeper dive, check out this full list of headers.
\\nThe Fetch API is the modern standard for making requests in JavaScript, but it’s useful to compare it with older methods like Axios and XMLHttpRequest
to understand its advantages. Here’s a comprehensive comparison:
Feature | \\nFetch API | \\nAxios | \\nXMLHttpRequest | \\n
---|---|---|---|
JSON handling | \\nManual parsing is needed | \\nAutomatic JSON parsing | \\nManual parsing needed | \\n
Error handling | \\nRequires manual checks (e.g., adding .catch() methods and !response.ok ) | \\nBuilt-in error handling | \\nComplex and inconsistent | \\n
Browser support | \\nMordern browsers only | \\nMordern browsers | \\nExcellent browser support including old browsers | \\n
Cancellation of requests | \\nUsing AbortController , this is supported | \\nSupported | \\nDoes not support request cancellation | \\n
Syntax | \\nMakes use of promises, with a short and clean syntax | \\nAlso makes use of promises | \\nUses mostly callbacks and is too verbose | \\n
In this guide, we’ve covered the fundamentals of the Fetch API, from making simple GET
and POST
requests to handling errors, working with headers, and managing streaming responses. While there’s much more to explore, this foundation equips you to confidently use fetch
in JavaScript. Mastering these concepts will help you build more efficient and reliable web applications.
TypeScript enums offer a strong method to group and name associated constants in your applications.
Hardcoding user roles, API statuses, or application states can make your code harder to maintain and more prone to errors, especially when developing larger applications. A single typo or a small change to one of these values can break functionality across the application.
TypeScript enums enable you to give clear, organized names to related constant values. You define the strings or numbers only once, then refer to them everywhere in the code.
\\nThis guide explains how enums work, the different types available, and when to use them. It also covers their limitations and when union types or object literals might be the better choice.
An enum is a special construct in TypeScript that allows a developer to define a set of named constants. Since some values in a TypeScript project never change — like user roles or API statuses — storing them as plain strings or numbers can lead to typos and incorrect values.
\\nRather than using scattered raw values, enums let you define them once and reference them across your code.
\\nFor example, when implementing user authentication, an enum can be used to define different user roles:
\\nenum UserRole {\\n Admin = \\"ADMIN\\",\\n Editor = \\"EDITOR\\",\\n Viewer = \\"VIEWER\\"\\n}\\n\\n
Instead of hardcoding role strings throughout your application, you can reference UserRole.Admin
, ensuring consistency and reducing errors.
Similarly, when handling an API response in a REST API, enums can represent different status codes for each response:
\\nenum ApiStatus {\\n Success = 200,\\n NotFound = 404,\\n ServerError = 500\\n}\\n\\n
This makes your code more readable and maintainable by replacing magic numbers with meaningful names.
\\nUnlike regular objects, enums come with type safety. TypeScript prevents assigning values that do not exist in the enum. Here is an example of an enum enforcing correct values:
\\nenum UserRole {\\n Admin = \\"ADMIN\\",\\n User = \\"USER\\",\\n Guest = \\"GUEST\\"\\n}\\n\\nlet role: UserRole = UserRole.Admin; // Correct\\nrole = \\"SuperAdmin\\"; // Type error\\n>\\n
With an object, TypeScript does not stop incorrect values. This means you could mistakenly assign a value that was not originally defined:
\\nconst UserRole = {\\n Admin: \\"ADMIN\\",\\n User: \\"USER\\",\\n Guest: \\"GUEST\\"\\n};\\n\\nlet role = UserRole.Admin; // No issue\\nrole = \\"SuperAdmin\\"; // No Type error\\n\\n
Enums work best when handling fixed sets of values, such as:
\\nAdmin
, User
, Guest
Success
, Error
, Pending
Processing
, Shipped
, Delivered
Mon
, Tues
, etc.Next, we’ll look at the different types of enums and how they work.
\\nEnums in TypeScript can store values as numbers, strings, or a mix of both. Each type works differently and is suited for specific use cases.
\\n\\nBy default, TypeScript assigns numbers to enum members, starting from 0
. If you do not specify values, TypeScript increments them automatically. Here is an example of a numeric enum with default values:
enum Direction {\\n Up, // 0\\n Down, // 1\\n Left, // 2\\n Right // 3\\n}\\n\\n
If you need specific values, you can manually assign them. TypeScript will continue counting from the last assigned number:
\\nenum Status {\\n Success = 1,\\n Failure, // 2\\n Pending // 3\\n}\\n\\n
One unique feature of numeric enums is reverse mapping, which allows you to retrieve the key using its value. This can be useful but may also expose values unexpectedly:
\\nconsole.log(Status.Success); // 1\\nconsole.log(Status[1]); // \\"Success\\"\\n\\n
String enums require manually assigned values, making them easier to read. They also prevent unexpected assignments since TypeScript enforces exact matches. Here is how a string enum is defined:
\\nenum Status {\\n Success = \\"SUCCESS\\",\\n Failure = \\"FAILURE\\",\\n Pending = \\"PENDING\\"\\n}\\n\\n
Unlike numeric enums, they do not allow reverse mapping, but they make comparisons clearer. For example, you can check an API response like this:
\\nfunction checkStatus(status: Status) {\\n if (status === Status.Success) {\\n console.log(\\"Everything is good\\");\\n }\\n}\\n\\n
TypeScript allows enums that mix numbers and strings, but this is rarely a good idea. It makes checking values harder and can introduce unnecessary complexity. Here is an example of a heterogeneous enum:
\\nenum Mixed {\\n Yes = \\"YES\\",\\n No = 0\\n}\\n\\n
Since combining types makes code harder to read and debug, it is best to avoid heterogeneous enums unless absolutely necessary.
\\nNext, we’ll look at key features of enums and how they affect performance.
\\nTypeScript enums come with a few built-in behaviors that set them apart from regular objects. Some of these can be useful, while others might cause unexpected issues if not understood properly.
\\nNumeric enums allow reverse mapping, meaning you can look up both the value from the key and the key from the value. Here is how it works:
\\nenum Status {\\n Success = 1,\\n Failure = 2,\\n Pending = 3\\n}\\n\\nconsole.log(Status.Success); // 1\\nconsole.log(Status[1]); // \\"Success\\"\\n\\n
This can help with debugging, but also makes all enum values visible at runtime, which may not be ideal for sensitive data. String enums do not support reverse mapping, so looking up a key from a value will return undefined
:
enum Status {\\n Success = \\"SUCCESS\\",\\n Failure = \\"FAILURE\\"\\n}\\n\\nconsole.log(Status[\\"SUCCESS\\"]); // undefined\\n\\n
Enum values can be constant (fixed at compile time) or computed (determined at runtime). Here is an example where some values are set dynamically:
\\nenum MathValues {\\n Pi = 3.14, // Constant\\n Random = Math.random(), // Computed\\n Length = \\"Hello\\".length // Computed\\n}\\n\\n
If an enum has a computed member, TypeScript requires any members after it to have explicit values. Here is an example of this rule in action:
\\nenum Example {\\n First = 1,\\n Second = Math.random(), // Computed\\n Third = 3 // This must be explicitly assigned\\n}\\n\\n
const enums and reducing runtime overhead

By default, enums generate JavaScript objects, increasing file size. const
enums prevent this by replacing enum references with their values at compile time. Here is how a const
enum
is defined:
const enum Direction {\\n Up,\\n Down,\\n Left,\\n Right\\n}\\n\\nlet move = Direction.Up;\\n\\n
When compiled, this becomes:
\\nlet move = 0;\\n\\n
Since const
enums do not create objects, they cannot be used for runtime operations like key lookups.
Next, we’ll look at how enums can be used in real-world applications.
\\nTypescript enums help keep fixed values organized and consistent across an application. They are useful for managing user roles, API responses, and application states without relying on raw strings or numbers.
\\nUser roles often define what a person can or cannot do in an application. Instead of checking raw strings, enums make these roles clear and prevent mistakes. Here is how user roles can be structured using an enum:
\\nenum UserRole {\\n Admin = \\"ADMIN\\",\\n User = \\"USER\\",\\n Guest = \\"GUEST\\"\\n}\\n\\nfunction checkAccess(role: UserRole) {\\n if (role === UserRole.Admin) {\\n console.log(\\"Access granted: Full permissions\\");\\n } else if (role === UserRole.User) {\\n console.log(\\"Access granted: Limited permissions\\");\\n } else {\\n console.log(\\"Access denied\\");\\n }\\n}\\n\\ncheckAccess(UserRole.Admin); // \\"Access granted: Full permissions\\"\\n\\n
Since the roles are predefined, there is no risk of typos like “admin”
instead of “ADMIN”
.
APIs return status codes to indicate whether a request was successful or failed. Using an enum makes it clear what each status represents. Here is how an API response status can be defined:
\\nenum ApiResponseStatus {\\n Success = 200,\\n NotFound = 404,\\n ServerError = 500\\n}\\n\\nfunction handleResponse(status: ApiResponseStatus) {\\n if (status === ApiResponseStatus.Success) {\\n console.log(\\"Request was successful\\");\\n } else if (status === ApiResponseStatus.NotFound) {\\n console.log(\\"Resource not found\\");\\n } else {\\n console.log(\\"Something went wrong on the server\\");\\n }\\n}\\n\\nhandleResponse(ApiResponseStatus.Success); // \\"Request was successful\\"\\n\\n
Instead of remembering that 200
means “success” or 404
means “not found,” the enum makes these codes readable and easy to update in one place if needed.
Applications often switch between different states, such as loading, ready, or error. Using an enum keeps these states well-defined and prevents unexpected values. Here is how an enum can represent application states:
\\nenum AppState {\\n Loading,\\n Loaded,\\n Error\\n}\\n\\nlet currentState: AppState = AppState.Loading;\\n\\nfunction updateState(state: AppState) {\\n if (state === AppState.Loading) {\\n console.log(\\"App is loading...\\");\\n } else if (state === AppState.Loaded) {\\n console.log(\\"App is ready\\");\\n } else {\\n console.log(\\"Something went wrong\\");\\n }\\n}\\n\\nupdateState(currentState); // \\"App is loading...\\"\\ncurrentState = AppState.Loaded;\\nupdateState(currentState); // \\"App is ready\\"\\n\\n
This approach keeps state management structured, prevents invalid values, and makes the code easier to maintain.
\\nNext, we’ll look at some of the limitations of enums and when they might not be the best choice.
\\nEnums are useful, but they come with some trade-offs. They add extra JavaScript code, can expose internal values, and may not always work well with APIs. These issues can affect performance, security, and type safety.
\\nUnlike other TypeScript features that disappear after compilation, enums turn into JavaScript objects. This increases file size and runtime overhead.
\\nHere is a simple enum in TypeScript:
\\nenum Direction {\\n Up,\\n Down,\\n Left,\\n Right\\n}\\n\\n
After compilation, this turns into:
\\nvar Direction;\\n(function (Direction) {\\n Direction[(Direction[\\"Up\\"] = 0)] = \\"Up\\";\\n Direction[(Direction[\\"Down\\"] = 1)] = \\"Down\\";\\n Direction[(Direction[\\"Left\\"] = 2)] = \\"Left\\";\\n Direction[(Direction[\\"Right\\"] = 3)] = \\"Right\\";\\n})(Direction || (Direction = {}));\\n\\n
For large projects, this extra code adds up. To avoid this, const enum
removes the object entirely.
Here is how a const
enum
works:
const enum Direction {\\n Up,\\n Down,\\n Left,\\n Right\\n}\\n\\nlet move = Direction.Up;\\n\\n
By using const enum
, the compiled output is much cleaner and more efficient. However, this approach has limitations, such as not being able to dynamically reference enum values at runtime.
Numeric enums allow reverse mapping, meaning their values can be accessed both ways. This can expose sensitive values in front-end applications. Here is an example:
\\nenum UserRole {\\n Admin = 1,\\n User = 2,\\n Guest = 3\\n}\\n\\nconsole.log(UserRole[1]); // \\"Admin\\" \\n\\n
An attacker inspecting the JavaScript output could list all roles. String enums do not allow reverse mapping, making them a safer choice.
\\nWhile enums prevent typos, numeric enums allow any number to be assigned, even if it is not defined. Here is what that looks like:
\\nenum Direction {\\n Up,\\n Down\\n}\\n\\nlet move: Direction = 10; // No TypeScript error\\n\\n
This defeats the purpose of type safety. String enums do not have this issue, as only valid values can be assigned.
\\nNext, we’ll look at better alternatives to enums and when to use them.
\\nWhile enums help organize constants, union types, object literals, and const
enums can sometimes be a better choice.
Union types enforce fixed values without adding extra JavaScript.
\\nHere is how they work:
\\ntype Status = \\"Success\\" | \\"Failure\\" | \\"Pending\\";\\nlet response: Status = \\"Success\\"; // Allowed\\nresponse = \\"Error\\"; // Type error\\n\\n
This example shows how union types restrict response
to only the specified values, catching invalid assignments at compile time.
Object literals act like enums but allow dynamic behavior and better optimization. Here is an example:
\\nconst Status = { Success: \\"SUCCESS\\", Failure: \\"FAILURE\\", Pending: \\"PENDING\\" } as const;\\nlet response = Status.Success; // Works\\nresponse = \\"Error\\"; // Type error\\n\\n
This demonstrates how object literals with an as const assertion create immutable, type-safe constants that can be used like enums.
const enums for performance

const
enums remove the object wrapper and inline values directly, reducing file size. Here is how they work:
const enum Direction { Up, Down, Left, Right }\\nlet move = Direction.Up; // Compiles to: let move = 0;\\n\\n
This example highlights how const enum
eliminates runtime objects by replacing enum references with their literal values during compilation.
When working with TypeScript enums, following best practices makes your code type safe and easy to maintain. It also improves the performance of your applications. Here are five key best practices to follow:
\\nString enums reduce errors by deterring accidental assignments and making problems easier to debug. They provide more readable and predictable values compared to numeric enums.
\\nconst
enum
when you want to optimize application performanceIf an enum’s values do not need to be checked at runtime, use const enum
to reduce compiled JavaScript size. This approach inlines enum values directly, eliminating runtime overhead.
Putting string and number values in an enum makes it harder to understand and can produce unexpected results. Stick to homogeneous enums (all strings or all numbers) for clarity and consistency.
\\nWhen you require flexible value storage, use other methods like objects or union types instead of enums. Enums are best suited for fixed, unchanging sets of values.
\\nFor simple scenarios, union types offer improved type safety with minimal JavaScript overhead. They are a lightweight alternative to enums when dealing with a small set of fixed values.
\\nIf you need a fixed set of values that won’t change, enums help keep your code structured and prevent accidental mistakes. String enums are usually the best choice since they are readable and avoid issues like reverse mapping. When enums start to feel unnecessary, union types or object literals might be the better way to go.
\\nChoose what works best for your project. Union types keep things simple and do not add extra JavaScript. Object literals give you more flexibility, while const
enums help reduce file size. There is no one-size-fits-all solution, so use what makes your code easier to manage.
To delete a local Git branch, you can use the command git branch -d branch_name
for merged branches or git branch -D branch_name
to force delete branches with unmerged changes.
For remote branches, the command git push origin -d branch_name (or its long form, git push origin --delete branch_name) removes the branch from the remote. Unlike local deletion, this works regardless of whether the branch has unmerged changes. These basic commands handle most of the branch deletion needs that developers encounter in their daily workflow.
Beyond these fundamental commands, this guide also explores more advanced branch management techniques, including the deletion of multiple branches at once, pruning remote branches with git remote prune origin
, and recovering accidentally deleted branches using git reflog
. We’ll also cover common errors you might encounter during branch deletion, their solutions, and best practices to maintain a clean, well-organized Git repository.
Git branches are fundamental to modern software development, allowing teams to work on multiple features and fixes at the same time without interfering with each other’s work. While creating branches is a daily part of software development, knowing how to clean them up is equally important.
\\nThis is because as a project progresses, its repository can become cluttered with merged or outdated branches, leading to confusion for team members. Just as a well-organized workspace improves productivity, a clean repository with properly managed branches helps maintain clarity and efficiency in your development workflow.
\\nWhether you’re working solo or as part of a team, this guide will help you learn the best practices for branch management, how to safely delete both local and remote Git branches, some of the common pitfalls to avoid when removing branches and how to recover from accidental branch deletions.
\\nThere are two types of branches in Git:
\\nWhen you create a new branch in Git, it creates a local branch for you by default. If you want to share that branch with others, you have to push it to a remote repository. Pushing a local branch to a remote repository creates a remote branch of the local branch with the same name.
\\nDeleting branches is a common practice in Git, and it’s essential to keep your repository clean and organized. Here are some scenarios where you might want to delete a branch:
\\nBefore deleting a branch, you must ensure that you’re not deleting any work that you might need in the future. Here are some safety considerations to keep in mind when deleting branches:
\\nNow that you understand the importance of branch management and the safety considerations when deleting branches, let’s dive into the methods for deleting branches in Git.
\\nWhen deleting a local branch, it’s important to know that the remote branch is not deleted automatically. You should also delete the remote branch separately if you no longer need it. The same goes for a remote branch; deleting it does not delete its local branch.
\\nTo delete a local branch in Git, you can use the git branch
command with the -d
flag followed by the branch name. The -d
flag stands for “delete” and is used to delete the specified branch:
git branch -d branch_name\\n\\n
For example, to delete a local branch named feature-branch
, you would run the following command:
git branch -d feature-branch\\n\\n
You can also use the flag --delete
instead of -d
to delete a local branch:
git branch --delete branch_name\\n\\n
When deleting a branch with unmerged changes, Git will throw an error to prevent accidental data loss. However, if you’re sure you want to delete the branch, you can use the -D
flag, which stands for “force delete,” to delete it:
git branch -D branch_name\\n\\n
If you prefer to use the --delete
flag to delete the branch, you can do so by including the --force
flag. This will force the deletion of the branch:
git branch --delete --force branch_name\\n\\n
To delete a remote branch in Git, you can use the git push
command with the --delete
flag or -d
flag followed by the remote branch name:
git push origin --delete branch_name\\n# or\\ngit push origin -d branch_name\\n\\n
For example, to delete a remote branch named feature-branch
, you would run the following command:
git push origin --delete feature-branch\\n\\n
The origin
in the command refers to the remote repository where the branch is located.
\\n\\n
To force the deletion of a remote branch, you can add the --force flag to the git push command:
git push origin --delete --force branch_name\\n\\n
In a case where you need to delete multiple branches at once, you can use the git branch
command with the -d
flag followed by the branch names separated by a space:
git branch -d branch_name_1 branch_name_2 branch_name_3\\n\\n
This will delete all the specified branches, one by one, in a single command. To forcefully delete multiple branches, you can use the -D flag instead of -d:

git branch -D branch_name_1 branch_name_2 branch_name_3
You can also write a script to delete multiple branches that match a specific pattern simultaneously. For example, to delete all branches that start with feature-
, you can run the following command:
git branch | grep \'feature-\' | xargs git branch -d\\n\\n
To delete all branches, excluding a particular branch, you can run the following:
\n# Delete all branches, excluding the main branch
git branch | grep -v 'main' | xargs git branch -d
To delete all branches, excluding multiple branches, you can run:
\\n# Delete all branches, excluding main and develop\\ngit branch | grep -vE \'main|develop\' | xargs git branch -d\\n\\n
To delete only merged branches or only unmerged branches, you can use the flags --merged
and --no-merged
, respectively:
# Delete only merged branches, excluding the main branch\\ngit branch --merged | grep -v \'main\' | xargs git branch -d\\n\\n# Delete only unmerged branches\\ngit branch --no-merged | xargs git branch -d\\n\\n
To delete multiple remote-tracking branches, you must include the -r flag in the git branch command, followed by the remote branch names (prefixed with the remote name) separated by a space. Note that this removes only the remote-tracking references in your local repository; to delete the branches on the remote itself, use git push origin --delete as shown earlier:

# Delete multiple remote-tracking branches
git branch -r -d origin/branch_name_1 origin/branch_name_2
# Force delete multiple remote-tracking branches
git branch -r -D origin/branch_name_1 origin/branch_name_2
# Delete all remote-tracking branches, excluding the main branch
git branch -r | grep -v 'main' | xargs -r git branch -rd
# Delete all remote-tracking branches, excluding main and develop
git branch -r | grep -vE 'main|develop' | xargs -r git branch -rd
# Delete only merged remote-tracking branches, excluding the main branch
git branch -r --merged | grep -v 'main' | xargs -r git branch -rd
# Delete only unmerged remote-tracking branches
git branch -r --no-merged | xargs -r git branch -rd
The command line is not the only way to delete branches in Git. You can also delete branches using Git GUI tools, such as GitKraken, Sourcetree, or GitHub Desktop. These tools provide a visual interface that makes it easy to manage branches, including deleting them.
\\nYou can also delete branches on GitHub or GitLab by navigating to the branches tab in your repository and selecting the branch you want to delete. From there, you can delete the branch with a single click.
\\nYou can also create actions or write scripts to delete branches automatically after they have been merged.
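As a rough sketch of such a script, the one-liner below (assuming main is your default branch and you run it from an up-to-date checkout) deletes every local branch already merged into main:

# Delete local branches already merged into main, keeping main itself
git checkout main && git pull
git branch --merged main | grep -vE '^\*|main' | xargs -r git branch -d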
\nEarlier, I mentioned that when you delete a local branch, the remote branch is not deleted automatically. The reverse is also true: when branches are deleted on the remote (for example, after a pull request is merged), your local repository still keeps stale remote-tracking references to them. Removing these stale references is known as pruning remote branches.
\nTo remove remote-tracking branches whose remote counterparts have been deleted, use the git fetch command with the --prune flag:

git fetch --prune

Alternatively, you can prune without fetching by running the following command (add --dry-run first if you want to preview what would be removed):

git remote prune origin
Don’t panic if you accidentally delete a branch! Git has a built-in mechanism to help you recover deleted branches. It keeps a record of all your branches’ commit history for a period of time, including the deleted ones. This allows you to recover a deleted branch and its commit history if needed.
\\nTo recover a deleted branch, you can use the git reflog
. The git reflog
command shows a log of all the actions you’ve taken in the repository and on all your branches, including deleted branches.
Run the command:
\\ngit reflog\\n\\n
Then, from the output of the git reflog
command, you can identify the commit hash of the branch before it was deleted. You can then use the git checkout
command to recover the deleted branch:
git checkout -b branch_name commit_hash\\n\\n
For example, to recover a deleted branch named feature-branch
with the commit hash abc123
, you would run the following command:
git checkout -b feature-branch abc123\\n\\n
When deleting branches in Git, you may encounter some errors. Here are some of the most common errors and their solutions:
\n- error: The branch 'branch_name' is not fully merged — This error occurs when you try to delete a branch with unmerged changes. If you are sure you no longer need the work, use the -D flag to force delete the branch
- error: Branch 'branch_name' not found — This error occurs when you try to delete a branch that does not exist. Make sure you’re using the correct branch name and that the branch exists before deleting it
- error: unable to delete 'branch_name': remote ref does not exist — This error occurs when you try to delete a remote branch that does not exist. Make sure you’re using the correct remote branch name and that the remote branch exists before deleting it
- error: Cannot delete branch 'branch_name' checked out at '/path/to/branch' — This error occurs when you try to delete the branch you’re currently on. Switch to a different branch before deleting it

To maintain a clean and organized Git repository, it’s important to follow best practices for branch management. Here are some best practices to consider when managing branches in Git:

- Use descriptive, consistent branch names, such as feature/add-new-feature or bugfix/fix-bug-123
- Delete branches promptly once they’ve been merged to keep the branch list manageable
- Check for unmerged changes and back up important work before deleting anything
- Communicate with your team before deleting shared or remote branches
Proper branch management is essential for maintaining a healthy Git repository and improving your development workflow. In this guide, you’ve learned how to safely delete both local and remote Git branches and how to recover from accidental branch deletions. Regular cleanup of obsolete branches helps keep your workflow efficient and your repository organized.
\\nBy following the steps and best practices outlined in this guide, you can ensure that your repository remains clean and clutter-free, making it easier for you and your team to collaborate effectively. Remember to always check for unmerged changes, backup important changes, and communicate with your team before deleting branches to avoid accidental data loss.
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nreact-scripts serves as the configuration and build tooling layer for React applications created with Create React App. At its core, react-scripts abstract away the complex configuration required for modern JavaScript applications, particularly around webpack, Babel, ESLint, and testing setups. This abstraction allows developers to focus on writing application code rather than spending time on build configuration.
\nSpecifically, react-scripts provides:

- A preconfigured webpack setup for bundling, asset handling, and a development server
- Babel transpilation so you can write modern JavaScript and JSX for any supported browser
- ESLint integration with a sensible default rule set
- A Jest-based testing setup
\\nBy handling these aspects, react-scripts enables developers to follow the “convention over configuration” principle, with sensible defaults that work for most React applications. This standardization also helped create consistency across React projects in the ecosystem during the peak popularity of Create React App.
\\nHowever, as web development tooling has evolved, some limitations of react-scripts have become apparent, particularly around build performance and flexibility, leading to the rise of alternatives that we’ll explore later in this article.
\\nEditor’s note: This article was last updated by Ikeh Akinyemi in March 2025 to update information around the current use of react-scripts, discuss the alternative Create React App, and address common challenges associated with using react-scripts.
\\nThe React ecosystem has evolved significantly since Create React App (CRA) was introduced as the go-to solution for bootstrapping React applications. In 2022, CRA and its underlying react-scripts package were the standard for quickly starting React projects without the configuration headaches. However, the landscape has shifted dramatically.
\\nOver the years, many existing projects heavily relied on react-scripts — but that isn’t the case anymore. Newer React applications are increasingly porting to alternative tooling like Vite, which offers significant performance improvements and a more modern development experience.
\\nDespite this shift, understanding react-scripts remains valuable for maintaining legacy projects, contributing to established codebases, or making informed decisions about migrating to newer toolchains.
\\nIn the past, creating a React app was a painful process. You had to slog through a lot of configuration, especially with webpack and Babel, before you could get your hands dirty and develop something meaningful.
\\nFortunately, Create React App was introduced as a solution, offering a handy module that comes with an outstanding configuration, and a scripts command called react-scripts that makes it much easier to build React applications.
\nThe aim of this guide is to provide you with a comprehensive overview of react-scripts, its functionality, current status in the React ecosystem, and alternatives for modern React development. Whether you’re maintaining a CRA project or considering a migration to newer tools, this article will equip you with the knowledge needed to navigate the changing React ecosystem.
\\nWe’ll give an overview of react-scripts, compare a few different types of scripts, and describe how Create React App dramatically streamlines the React development process. Let’s dive in!
\nIn programming, a script is a list of instructions that tells another program what to do; React is no exception. Create React App ships with four main scripts, each of which we’ll explore later. But for now, we’ll focus on where to find these scripts.
\\nFirst, create a new React app with the following command to find predefined scripts:
\\nnpx create-react-app my-app\\n\\n
The above command creates a new React app with cra-template and all required configurations.
\\nEvery configuration required for the React app comes through the react-scripts package. Now, check the package.json
file of the newly created project.
In React apps, scripts are located in the scripts section of the package.json file, as shown below:
\\"scripts\\": {\\n \\"start\\": \\"react-scripts start\\",\\n \\"build\\": \\"react-scripts build\\",\\n \\"test\\": \\"react-scripts test\\",\\n \\"eject\\": \\"react-scripts eject\\"\\n }\\n\\n
In the previous JSON snippet, the package.json
file has some default scripts, but it’s still possible to edit them. You can execute these scripts with your preferred Node package manager CLI.
As you can see, a fresh React app comes with four scripts that use the package react-scripts. Now that we know what a script is and where to find them, let’s dive into each one and explain what it does to a React app.
\\n\\nstart
In development, React runs a local Node.js server that serves the app at http://localhost:3000. The start script enables you to start this webpack development server.
You can run the start
script command on the terminal with either npm
or yarn
:
yarn start\\nnpm start\\n\\n
This command will not only start the development server, but will also watch for file changes and update the app in place using webpack’s Hot Module Replacement (HMR) feature. In addition, if it fails to start the server, it will show lint errors in the terminal in the form of meaningful error messages.
\\ntest
Create React App uses Jest as a test runner. The test
script enables you to launch the test runner in interactive watch mode that lets you control Jest with your keyboard.
The test
script can be run on the terminal with the following commands:
yarn test\\nnpm test\\n\\n
The default React template comes with one predefined test case for the sample application interface. Open the src/App.test.js
file and find the following sample test case:
test(\'renders learn react link\', () => {\\n render(<App />);\\n const linkElement = screen.getByText(/learn react/i);\\n expect(linkElement).toBeInTheDocument();\\n});\\n\\n
The above test case checks whether the app renders the learn react link (case-insensitive). Enter the npm test
(or yarn test
) command and press the A key to run all test cases, as shown below:
I won’t dive too deep into testing React apps, but keep in mind that any file with .test.js
or .spec.js
extensions will be executed when the script is launched.
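For example, a hypothetical src/sum.test.js file like the one below would be picked up and run automatically by the test script:

// src/sum.test.js
function sum(a, b) {
  return a + b;
}

// Jest provides test() and expect() as globals in CRA projects
test('adds two numbers', () => {
  expect(sum(2, 3)).toBe(5);
});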
build
React is modular, which is why you can create several files or components if you wish. These separate files need to be merged, or bundled to be precise, into a single output. That’s one of the major benefits of the build
script.
The other is performance; as you know, development mode is not optimized for production environments. And React uses the build
script to ensure that the finished project is bundled, minified, and optimized with best practices for deployment.
The script can be run with the following commands:
\\nyarn build\\nnpm run build\\n\\n
After running the build
script, you can find all deployable optimized static resources inside the build
directory.
There are some additional options that can be passed to the build
script. For example, you can use the --stats
option to generate a bundle stats file that you can visualize with the webpack-bundle-analyzer tool.
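As a quick sketch of that flow (assuming react-scripts 3.3 or later, which writes the stats file into the build directory):

# Generate build/bundle-stats.json alongside the production build
npm run build -- --stats

# Visualize the bundle contents
npx webpack-bundle-analyzer build/bundle-stats.json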
See the docs for a deeper dive on how to enhance your build
script.
eject
The Create React App documentation characterizes this script as a “one-way operation” and warns that “once you eject, you can’t go back!” Create React App comes with an excellent configuration that helps you build your React app with the best practices in mind to optimize it.
\\nHowever, we may have to customize the pre-built react-scripts with additional configurations in some advanced scenarios. The eject
script gives you full control over the React app configuration. For example, you can customize the webpack or Babel configuration according to a specific need by ejecting the React app.
Running the eject
script will remove the single build dependency from your project. That means it will copy the configuration files and the transitive dependencies (e.g., webpack, Babel, etc.) as dependencies in the package.json
file. If you do that, you’ll have to ensure that the dependencies are installed before building your project.
After running the eject
command, it won’t be possible to run it again, because all scripts will be available except the eject
one. Use this command only if you need to. Otherwise, stick with the default configuration. It’s better, anyway.
To run the command on the terminal, type the following command:
\\nyarn eject\\nnpm run eject\\n\\n
Ejecting lets you customize anything in your React configuration, but each ejected project then carries its own diverged copy of the configuration, which you have to maintain yourself in every React project. Therefore, creating a react-scripts fork is a better idea to make a reusable custom React app configuration.
\\nYou can use a forked react-scripts module with the following command:
\\nnpx create-react-app my-app --scripts-version react-scripts-fork\\n\\n
The above command scaffolds a new React app by using the react-scripts-fork package as the react-scripts source.
\nPreconfigured react-scripts don’t typically accept many CLI options for customizing their default behaviors. However, react-scripts lets developers apply various advanced configurations via environment variables that you can set in the terminal.
\\nFor example, you can change the development server port with the PORT
environment variable, as shown below:
PORT=5000 yarn start\\nPORT=5000 npm start\\n\\n
Also, you can change the default application build directory by setting BUILD_PATH
as follows:
BUILD_PATH=./dist yarn build\\nBUILD_PATH=./dist npm run build\\n\\n
If you want, you can update your existing script definitions with environment variables, too. For example, if you use the following JSON snippet in package.json
, you can always use port 5000
with the start
script:
\\"scripts\\": {\\n \\"start\\": \\"PORT=5000 react-scripts start\\", // port 5000 \\n \\"build\\": \\"react-scripts build\\",\\n \\"test\\": \\"react-scripts test\\",\\n \\"eject\\": \\"react-scripts eject\\"\\n}\\n\\n
All supported environment variables are available in the official documentation.
\\nAs of 2025, react-scripts and Create React App have effectively become outdated tools in the React ecosystem. This isn’t just an opinion; it’s reflected in the project’s maintenance status and community adoption trends.
\\nThe last significant update to react-scripts was released in April 2022. This lack of updates is particularly concerning in the fast-moving JavaScript ecosystem, where dependencies, security patches, and best practices evolve rapidly.
\\nThe official Create React App repository shows minimal activity, with existing issues and pull requests largely unaddressed.
\nThis stagnation has led to several problems for developers still using react-scripts:

- Security warnings: if you run npx create-react-app today, you’ll receive multiple vulnerability warnings, most of which remain unfixed due to outdated dependencies
- Compatibility drift: newer Node.js and React releases increasingly conflict with the aging toolchain

Even before maintenance stopped, react-scripts was already showing performance limitations compared to more modern tooling:

- Slow development server startup, especially on larger projects
- Slower rebuilds and Hot Module Replacement than esbuild-based tools such as Vite
- Less optimized production bundles
\\nWhile the React team hasn’t officially deprecated Create React App, their documentation now prominently features alternative approaches. The official React documentation recommends using production-grade frameworks like Next.js, with Vite listed as an option for those wanting a simpler, client-side only solution.
\\nIf you’re considering moving away from react-scripts, the answer is yes, you can remove it—but this requires a migration to an alternative build system. The most popular direct replacement as of 2025 is Vite, which offers significant performance improvements and modern defaults. To learn more about migrating your CRA apps to Vite, visit this blog post.
\\nThe effort of migration comes with substantial benefits. Vite’s development server starts in milliseconds instead of seconds or minutes, and its Hot Module Replacement makes changes appear almost instantly in the browser. You’ll also enjoy smaller bundles through better production optimization, support for the latest JavaScript features through modern defaults, and the peace of mind that comes with regular updates and security patches through active maintenance.
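If you decide to go that route, scaffolding a fresh Vite-powered React app takes only a few commands (my-app is a placeholder project name):

npm create vite@latest my-app -- --template react
cd my-app
npm install
npm run dev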
\nDespite the newer options available, many developers still work with react-scripts in existing projects. Here are solutions to some common issues you might encounter:
\\nIf you’ve cloned a React project from a repository and encounter errors when trying to run npm start
, the issue might be that react-scripts isn’t properly listed in the dependencies.
Problem — Error like below:
\\nError: Cannot find module \'react-scripts\'\\n\\n
Solution — Ensure react-scripts is in your package.json:
\\n\\"dependencies\\": {\\n \\"react-scripts\\": \\"5.0.1\\",\\n // other dependencies\\n}\\n\\n
If it’s not there, add it:
\\nnpm install react-scripts --save\\n\\n
By including the --save
flag, npm will add react-scripts to your package.json, ensuring the commands remain available to anyone who installs the project’s dependencies. (npm 5 and later save installed packages to dependencies by default, so the flag is optional on modern npm.)
Some projects may need to use a specific version of react-scripts for compatibility reasons.
\\nProblem — You’re encountering errors like:
\\nInvalid options object. Dev Server has been initialized using an options object that does not match the API schema\\n\\n
Solution — Downgrade to a specific version using:
\\n# For npm\\nnpm uninstall react-scripts\\nnpm install --save [email protected]\\n\\n# For yarn\\nyarn remove react-scripts\\nyarn add [email protected]\\n\\n
Alternatively, you can directly edit your package.json:
\\n\\"dependencies\\": {\\n \\"react-scripts\\": \\"4.0.3\\",\\n // other dependencies\\n}\\n>\\n
Then run npm install
or yarn
to update your node_modules.
React-scripts may have issues with certain Node.js versions, especially newer ones.
\\nProblem — You’re encountering cryptic errors after upgrading Node.js
\\nSolution — Use a Node version manager like nvm (or nvm-windows) to switch to a compatible Node version:
\\n# Install Node.js 16.x (LTS when react-scripts 5.0.1 was released)\\nnvm install 16\\nnvm use 16\\n\\n
This allows you to use different Node versions for different projects without reinstallation.
\\nWhen building larger applications, you might encounter memory issues.
\\nProblem — Fatal error like below:
\\nFATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed\\n\\n
Solution — Increase the memory limit for Node.js:
\\n# For Windows (CMD)\\nset NODE_OPTIONS=--max_old_space_size=4096\\nnpm run build\\n\\n# For Windows (PowerShell)\\n$env:NODE_OPTIONS=\\"--max_old_space_size=4096\\"\\nnpm run build\\n\\n# For Mac/Linux\\nNODE_OPTIONS=--max_old_space_size=4096 npm run build\\n\\n
You can add this to your package.json scripts for convenience:
\\n\\"scripts\\": {\\n \\"build\\": \\"react-scripts build\\",\\n \\"build:high-memory\\": \\"cross-env NODE_OPTIONS=--max_old_space_size=4096 react-scripts build\\"\\n}\\n\\n
As React evolves beyond version 18, compatibility issues with older react-scripts versions may increase.
\\nProblem — Features from newer React versions don’t work with your CRA setup.
\nSolution — You have three options:

- Migrate to a modern toolchain such as Vite or Next.js (the long-term fix)
- Stay on a React version known to work with your react-scripts release
- Patch the configuration with CRACO as a temporary workaround

For temporary fixes with CRACO:
\\nnpm install @craco/craco --save-dev\\n\\n
Then update your package.json:
\\n\\"scripts\\": {\\n \\"start\\": \\"craco start\\",\\n \\"build\\": \\"craco build\\",\\n \\"test\\": \\"craco test\\"\\n}\\n\\n
Create a craco.config.js
file in your project root with your customizations.
I hope this guide sheds enough light on the significant changes since Create React App and react-scripts simplified the React application bootstrapping process. Not only does the app come with useful scripts that can help make any developer’s life easier, but some commands come with flexible options that enable you to fit the scripts to the unique needs of your project.
\\nWhile react-scripts served the community admirably for many years, its lack of updates since 2022 has left it increasingly out of step with modern development practices and the evolving React ecosystem. So for your next React project, check out the suite of Vite project setups that might serve your use case.
\\nAs React itself continues to evolve, the tooling around it will inevitably change as well. By staying informed about these changes and remaining open to new approaches, you’ll be well-positioned to build fast, maintainable, and future-proof React applications, regardless of the build tools you choose to employ.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nLarge language models (LLMs) have gained immense popularity, with chat interfaces often assumed to be the default way to interact with them. This article explores why chatbots became the norm, the problems they create for developers, and alternative, more effective solutions.
\\nChatbots emerged largely because early LLM services used conversational formats, which appealed to users by mimicking human-like interactions. A notable example is OpenAI, which promoted its services by releasing ChatGPT, a widely publicized chat interface to its powerful LLM.
\\nThe success of ChatGPT reinforced the idea that chat-based interaction was the most intuitive and effective way to harness LLM capabilities. However, while chat works well in certain contexts, there are many scenarios where a different interface would offer a more optimal experience.
\\nFor example, LLMs can function as backend components where most interactions are handled by a frontend interface, such as a dashboard, web app, or mobile app, without relying on a chat-based system. This is the case in a personalized recommendation engine, where an LLM analyzes user preferences and behavior to generate tailored recommendations for products (e.g., a trip to Italy), content (e.g., recipes), or resources (e.g., scientific papers) without requiring a chat interface.
\\nSimilarly, in the business domain, an LLM can power a business intelligence dashboard, interpreting and summarizing data, generating insights, and providing explanations for visualizations (e.g., charts and graphs). Users can interact with the system simply by clicking buttons or selecting data points, rather than engaging in a chat.
\\nMany developers default to chat interfaces even for tasks better suited to other UI types — such as dropdown menus, interactive dashboards, and command line interfaces — resulting in inefficient, overly complex user experiences.
\\nFor example, a simple Google search for “chat with PDF” reveals countless services offering the ability to “chat” with a document:
\\nThe concept is enticing and straightforward: find a clever way to feed document text into a prompt — perhaps using a retrieval-augmented generation (RAG) architecture like LlamaIndex — and allow users to ask questions as if they were chatting with the document.
\\nHowever, chat-based interactions create certain expectations. Users assume real-time responses, which can significantly strain hardware resources. Forcing chat interactions complicates development, negatively impacts the user experience, and introduces performance inefficiencies.
\\nIn many cases, simpler and more direct interfaces would be more suitable, reducing complexity and optimizing system performance.
\\nThe “chat with PDF” use case is part of a broader trend: “chat with your data.” While this model has its advantages, it is not always sustainable as a primary solution.
\\nChat interfaces excel in situations involving unstructured data or open-ended questions, providing users with flexibility and opportunities for exploration. LLMs have the potential to improve UI designs by introducing smart features such as form completion, dynamic search, and data insights without forcing interactions into a chat format.
\\nThe figure below shows the “chat with your data” model. Unstructured data, represented by a cloud, is fed into an “?AG” block:
\\nThis representation is used to illustrate various architectures that can augment generation, such as retrieval augmented generation (RAG), cache augmented generation (CAG), and the more general knowledge augmented generation (KAG). The symbol “?AG” encompasses all these possible variations of augmented generation approaches. The user (the smiley icon) interacts by using a chat-based interface with the model.
\\nI propose a different model. The idea is to automate most interactions while allowing users to refine outputs through selective dialogue:
\nA subsequent phase then refines the results by chatting with specific sections of the artifact.
\\n\\nMicrosoft Copilot in Visual Studio Code is a successful example of this hybrid model. Developers work on their code, then ask Copilot to generate specific sections. The generated code varies in quality depending on the language and framework, but users can engage in targeted conversations with specific sections as needed. This aligns with the information retrieval mantra: “Overview first, zoom and filter, then details on demand.”
\\nIn the following example, the developer asks Copilot to expand the selected lines of code:
\nThe information retrieval mantra, formulated by Ben Shneiderman, is a principle widely used in user interface and data visualization design. It consists of three main steps:

- Overview first: present a high-level summary of the entire dataset
- Zoom and filter: let users narrow the view to the items they care about
- Details on demand: reveal full information about a specific item only when requested

\nThis approach helps users navigate complex datasets efficiently, moving from a high-level understanding to detailed information as needed.
\\nFor LLM integration, we propose a similar mantra:
\\n“Draft first, refine through dialogue, then perfect on demand.”
\\nThis approach balances automation with user-driven interaction, enabling efficient initial content creation while allowing thoughtful customization through guided conversation. However, it also presents challenges, as we are experiencing a paradigm shift in capabilities. Developers invest significant effort in controlling and limiting an LLM’s tendency to generate answers—even when data is incomplete or the response is uncertain.
\\nTo better regulate an LLM’s inclination to always provide an answer, a useful approach is to give users the ability to accept or reject even the smallest intervention. For example, in the following figure, Microsoft Copilot in Visual Studio Code prompts users to explicitly accept or decline a proposed solution. This subtle UI integration allows users to leverage an LLM while maintaining full control over every interaction:
\\nA key benefit of this model is that the first phase occurs asynchronously: the LLM can query the augmentation component multiple times before returning a response. Users then review the output and engage in minimal, focused chat interactions for refinements. This reduces token exchange, improves efficiency, and maintains a smooth user experience.
\\nWhile chat interfaces have become the default for interacting with LLMs, they are not always the most efficient or effective solution. Overreliance on chat can introduce unnecessary complexity, strain resources, and limit the user experience.
\\nBy adopting a hybrid approach — automating initial drafts and using chat-based refinement only when necessary — we can optimize both performance and usability.
\\nThis model balances LLM automation with user-driven refinement, leading to more flexible and efficient workflows, especially in contexts where structured interactions are preferable. Rethinking LLM integration beyond chat will enable developers to create more effective, user-friendly experiences.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nAbortController
API\\n AbortController
?\\n AbortController
in Node.js\\n AbortController
with fs.readFile
\\n AbortSignal
\\n AbortSignal
to time out async operations\\n AbortSignal
to cancel multiple operations\\n AbortSignals
\\n AbortController
and AbortSignal
\\n This tutorial will offer a complete guide on how to use the AbortController
and AbortSignal
APIs in both your backend and frontend. In our case, we’ll focus on Node.js and React.
The AbortController API

The AbortController
API became part of Node in v15.0.0. It is a handy API for aborting some asynchronous processes, similar to the AbortController
interface in the browser environment.
You need to create an instance of the AbortController
class to use it:
const controller = new AbortController();\\n\\n
An instance of the AbortController
class exposes the abort
method and the signal
property.
\\nInvoking the abort
method emits the abort
event to notify the abortable API watching the controller about the cancellation. You can pass an optional reason for aborting to the abort
method. If you don’t include a reason for the cancellation, it defaults to the AbortError
.
To listen for the abort
event, you need to add an event listener to the controller’s signal
property using the addEventListener
method so that you run some code in response to the abort
event. An equivalent method for removing the event listener is the removeEventListener
method.
The code below shows how to add and remove the abort
event listener with the addEventListener
and removeEventListener
methods:
const controller = new AbortController();\\nconst { signal } = controller;\\n\\nconst abortEventListener = (event) => {\\n console.log(signal.aborted); // true\\n console.log(signal.reason); // Hello World\\n};\\n\\nsignal.addEventListener(\\"abort\\", abortEventListener);\\ncontroller.abort(\\"Hello World\\");\\nsignal.removeEventListener(\\"abort\\", abortEventListener);\\n\\n
The controller’s signal
has a reason
property, which is the reason you pass to the abort
method at cancellation. Its initial value is undefined
. The value of the reason
property changes to the reason you pass as an argument to the abort
method or defaults to AbortError
if you abort without providing a reason for the cancellation. Similarly, the signal’s aborted
property with an initial value of false
changes to true
after aborting.
Unlike in the above example, practical use of the AbortController
API involves passing the signal
property to any cancelable asynchronous API. You can pass the same signal
property to as many cancelable APIs as you need. The APIs will then wait for the controller’s “signal” to abort the asynchronous operation when you invoke the abort
method.
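For instance, a single controller can govern several operations at once; the sketch below (using placeholder URLs) cancels two in-flight fetches with one abort() call:

const controller = new AbortController();
const { signal } = controller;

// Both requests watch the same signal
const first = fetch("https://example.com/a", { signal });
const second = fetch("https://example.com/b", { signal });

// One call aborts every operation attached to this signal
controller.abort();

Promise.allSettled([first, second]).then((results) => {
  console.log(results.map((r) => r.status)); // ["rejected", "rejected"]
});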
Most of the built-in cancellation-aware APIs implement the cancellation out of the box for you. You pass in the controller’s signal
property to the API, and it aborts the process when you invoke the controller’s abort
method.
However, to implement a custom cancelable promise-based functionality, you need to add an event listener which listens for the abort
event and cancels the process from the event handler when the event is triggered.
Editor’s note: This article was updated by Chizaram Ken in March 2025 to include more comprehensive information on frontend and backend use cases for AbortController
in both Node.js and React.
Why AbortController?

JavaScript is a single-threaded programming language. Depending on the runtime environment, the JavaScript engine offloads asynchronous processes, such as making network requests, file system access, and other time-consuming jobs, to some APIs to achieve asynchrony.
\\nOrdinarily, we expect the result of an asynchronous operation to succeed or fail. However, the process can also take more time than anticipated, or you may no longer need the results when you receive them.
\\nTherefore, it is logical to terminate an asynchronous operation that has taken more time than it should or whose result you don’t need. However, doing so natively was a daunting challenge for a very long time.
\\n\\nAbortController
was introduced in Node v15.0.0 to abort certain asynchronous operations natively in Node.
AbortController in Node.js

The AbortController
API is a relatively new addition to Node. Therefore, a few asynchronous APIs support it at the moment. These APIs include the new Fetch API, timers, fs.readFile
, fs.writeFile
, http.request
, and https.request
.
We will learn how to use the AbortController API with some of the mentioned APIs. Because the APIs work with AbortController
in a similar way, we’ll only look at the Fetch and fs.readFile
API.
AbortController with the Fetch API

Historically, node-fetch
has been the de facto HTTP client for Node. With the introduction of the Fetch API in Node.js, however, that is about to change. Fetch is one of the native APIs whose behavior you can control with the AbortController
API.
As explained above, you pass the signal
property of the AbortController
instance to any abortable, promise-based API like Fetch. The example below illustrates how you can use it with the AbortController
API:
const url = \\"https://jsonplaceholder.typicode.com/todos/1\\";\\n\\nconst controller = new AbortController();\\nconst signal = controller.signal;\\n\\nconst fetchTodo = async () => {\\n try {\\n const response = await fetch(url, { signal });\\n const todo = await response.json();\\n console.log(todo);\\n } catch (error) {\\n if (error.name === \\"AbortError\\") {\\n console.log(\\"Operation timed out\\");\\n } else {\\n console.error(err);\\n }\\n }\\n};\\n\\nfetchTodo();\\n\\ncontroller.abort();\\n\\n
The trivial example above illustrates how to use the AbortController
API with the Fetch API in Node. However, in a real-world project, you don’t start an asynchronous operation and abort it immediately like in the code above.
It is also worth emphasizing that fetch
was long marked an experimental feature in Node and only became stable in Node v21. On older versions of Node, its behavior might differ between releases.
AbortController
with fs.readFile
In the previous section, we looked at using AbortController
with the Fetch API. Similarly, you can use this API with the other cancelable APIs.
You can do this by passing the controller’s signal
property to the API’s respective function. The code below shows how to use AbortController
with fs.readFile
:
const fs = require(\\"node:fs\\");\\n\\nconst controller = new AbortController();\\nconst { signal } = controller;\\n\\nfs.readFile(\\"data.txt\\", { signal, encoding: \\"utf8\\" }, (error, data) => {\\n if (error) {\\n if (error.name === \\"AbortError\\") {\\n console.log(\\"Read file process aborted\\");\\n } else {\\n console.error(error);\\n }\\n return;\\n }\\n console.log(data);\\n});\\n\\ncontroller.abort();\\n\\n
Since the other cancelable APIs work similarly with AbortController
, we won’t cover them here.
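That said, here is one more minimal sketch, this time with the promise-based timers API, whose functions also accept a signal in their options object:

const { setTimeout: sleep } = require("node:timers/promises");

const controller = new AbortController();
const { signal } = controller;

sleep(5000, "done", { signal })
  .then((value) => console.log(value))
  .catch((error) => {
    if (error.name === "AbortError") {
      console.log("Timer cancelled");
    }
  });

// Cancel the pending timer immediately
controller.abort();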
AbortSignal
Each AbortController
class instance has a corresponding AbortSignal
class instance, accessible using the signal
property. However, AbortSignal
has functions such as the AbortSignal.timeout
static method that you can also use independent of AbortController
.
The AbortSignal
class extends the EventTarget
class and can receive the abort
event. Therefore, you can use the addEventListener
and removeEventListener
methods to add and remove listeners for the abort
event:
const controller = new AbortController();\\nconst { signal } = controller;\\n\\nsignal.addEventListener(\\n \\"abort\\",\\n () => {\\n console.log(\\"First event handler\\");\\n },\\n { once: true }\\n);\\nsignal.addEventListener(\\n \\"abort\\",\\n () => {\\n console.log(\\"Second event handler\\");\\n },\\n { once: true }\\n);\\n\\ncontroller.abort();\\n\\n
As in the above example, you can add as many event handlers as possible. Invoking the controller’s abort
method will trigger all the event listeners. Removing the abort
event listener after aborting the asynchronous process is standard practice to prevent memory leaks.
You can pass the optional third argument { once: true }
to addEventListener
as we did above instead of using removeEventListener
to remove the event listener. The optional third argument will ensure Node triggers the event listener once and removes it.
Using AbortSignal to time out async operations

As mentioned above, in addition to using it with AbortController
, the AbortSignal
class has some handy methods you might need. One of these methods is the AbortSignal.timeout
static method. As its name suggests, you can use it to abort cancelable asynchronous processes on timeout.
It takes the number of milliseconds as an argument and returns a signal you can use to timeout an abortable operation. The code below shows how you can implement it with the Fetch API:
const signal = AbortSignal.timeout(200);
const url = "https://jsonplaceholder.typicode.com/todos/1";

const fetchTodo = async () => {
  try {
    const response = await fetch(url, { signal });
    const todo = await response.json();
    console.log(todo);
  } catch (error) {
    if (error.name === "AbortError" || error.name === "TimeoutError") {
      console.log("Operation timed out");
    } else {
      console.error(error);
    }
  }
};

fetchTodo();
You can use AbortSignal.timeout
similarly with the other abortable APIs.
Combining AbortSignal to cancel multiple operations

When you are working with more than one abort signal, you can combine them using the AbortSignal.any()
. This is really helpful when you need multiple ways to cancel the same operation.
In the example below, I will create two controllers. The first one will be controlled by the user of the API, and the other will be used for internal timeout purposes. If either one aborts, the event listener gets removed:
\\n// Create two separate controllers for different concerns\\nconst userController = new AbortController();\\nconst timeoutController = new AbortController();\\n\\n// Set up a timeout that will abort after 5 seconds\\nsetTimeout(() => timeoutController.abort(), 5000);\\n\\n// Register an event listener that can be aborted by either signal\\ndocument.addEventListener(\'click\', handleUserClick, {\\n signal: AbortSignal.any([userController.signal, timeoutController.signal])\\n});\\n\\n
If any signal in the group aborts, the combined signal immediately aborts, ignoring any subsequent abort events. This provides you with a clean separation of concerns.
Aborting streams with AbortSignals

AbortSignals allow you to easily stop a stream. For instance, if you intend to stop a stream because you have already found the value you are looking for, or you want to do something else entirely, you can use AbortSignal
this way:
const abortController = new AbortController();
const { signal } = abortController;

// Pipe a readable stream into a writable stream; passing the signal
// in the pipeTo() options lets you cancel the whole pipeline
readableStream
  .pipeTo(writableStream, { signal })
  .catch((error) => {
    if (error.name === "AbortError") {
      console.log("Stream processing aborted");
    }
  });

// To abort:
abortController.abort();

In the example above, the AbortController creates a signal that, when passed in the pipeTo() options, allows you to cancel the stream’s processing by calling abort() on the controller. For Node streams, the stream.addAbortSignal() helper serves the same purpose.
AbortController
and AbortSignal
As highlighted in the previous section, several built-in asynchronous APIs support the AbortController
API. However, you can also implement a custom abortable promise-based API that uses AbortController
.
Like the built-in APIs, your API should take the signal
property of an AbortController
class instance as an argument as in the example below. It is standard practice for all APIs capable of using the AbortController
API:
const myAbortableApi = (options = {}) => {\\n const { signal } = options;\\n\\n if (signal?.aborted === true) {\\n throw new Error(signal.reason);\\n }\\n\\n const abortEventListener = () => {\\n // Abort API from here\\n };\\n if (signal) {\\n signal.addEventListener(\\"abort\\", abortEventListener, { once: true });\\n }\\n try {\\n // Run some asynchronous code\\n if (signal?.aborted === true) {\\n throw new Error(signal.reason);\\n }\\n // Run more asynchronous code\\n } finally {\\n if (signal) {\\n signal.removeEventListener(\\"abort\\", abortEventListener);\\n }\\n }\\n};\\n\\n
In the example above, we first checked whether the value of signal’s aborted
property is true
. If so, it means the controller’s abort
method has been invoked. Therefore, we throw an error.
As mentioned in the previous sections, you can register the abort
event listener using the addEventListener
method. To prevent memory leaks, we are passing the { once: true }
option as the third argument to the addEventListener
method. It removes the event handler after handling the abort
event.
Similarly, we removed the event listener using the removeEventListener
in the finally
block to prevent memory leaks. If you don’t remove it, and the myAbortableApi
function runs successfully without aborting, the event listener you added will still be attached to the signal
even after exiting the function.
The AbortController
API is particularly useful to React developers, but in different ways.
When you want to use event listeners, you will need to register each one with addEventListener, and then carefully remove each one with removeEventListener in a cleanup function.
Although this works, it is tedious and prone to typos. Let us look at a real example that might look familiar.
\\nFor instance, you are building a dashboard that tracks mouse movements, listens for keyboard shortcuts, monitors scroll position, and responds to window resizing. Those are four different event listeners to manage.
\\nIn a normal scenario, you’d do this:
\\nuseEffect(() => {\\n // Define all your handler functions\\n const handleMouseMove = (e) => { /* update state */ };\\n const handleKeyPress = (e) => { /* update state */ };\\n const handleScroll = () => { /* update state */ };\\n const handleResize = () => { /* update state */ };\\n\\n // Add all the listeners\\n document.addEventListener(\'mousemove\', handleMouseMove);\\n document.addEventListener(\'keydown\', handleKeyPress);\\n window.addEventListener(\'scroll\', handleScroll);\\n window.addEventListener(\'resize\', handleResize);\\n\\n // Return a cleanup function that removes them all\\n return () => {\\n document.removeEventListener(\'mousemove\', handleMouseMove);\\n document.removeEventListener(\'keydown\', handleKeyPress);\\n window.removeEventListener(\'scroll\', handleScroll);\\n window.removeEventListener(\'resize\', handleResize);\\n };\\n}, []);\\n\\n
This works, but you will need to keep those function references around just so you can pass the same reference to both functions. Using AbortController
, it looks like this:
useEffect(() => {\\n const controller = new AbortController();\\n const { signal } = controller;\\n\\n // Define all your handler functions\\n const handleMouseMove = (e) => { /* update state */ };\\n const handleKeyPress = (e) => { /* update state */ };\\n const handleScroll = () => { /* update state */ };\\n const handleResize = () => { /* update state */ };\\n\\n // Add all the listeners with the signal\\n document.addEventListener(\'mousemove\', handleMouseMove, { signal });\\n document.addEventListener(\'keydown\', handleKeyPress, { signal });\\n window.addEventListener(\'scroll\', handleScroll, { signal });\\n window.addEventListener(\'resize\', handleResize, { signal });\\n\\n // Just one line for cleanup!\\n return () => controller.abort();\\n}, []);\\n\\n
You can clean everything up with just one line, no matter the number of event listeners available. This is cleaner and less prone to error.
\\nMost examples online you may find using AbortController
in React are mostly implemented within a useEffect()
. However, you don’t necessarily need a useEffect()
to use an AbortController
in React.
You can also use the AbortController for fetch requests in React.
Consider, for example, a project that needs a search feature firing off API requests as the user types.
\nThe problem is that if the user types quickly, you end up with multiple requests in flight. Sometimes, older requests finish after newer ones, causing the results to jump around.
\\nUsing AbortController
in React can help solve this problem:
// Key implementation of AbortController for API requests in React\\nimport { useRef, useState } from \'react\';\\n\\n// Component with search functionality\\nconst SearchComponent = () => {\\n const controllerRef = useRef<AbortController>();\\n const [query, setQuery] = useState<string>();\\n const [results, setResults] = useState<Array<any> | undefined>();\\n\\n async function handleOnChange(e: React.SyntheticEvent) {\\n const target = e.target as typeof e.target & {\\n value: string;\\n };\\n\\n // Update the query state\\n setQuery(target.value);\\n setResults(undefined);\\n\\n // Cancel any previous in-flight request\\n if (controllerRef.current) {\\n controllerRef.current.abort();\\n }\\n\\n // Create a new controller for this request\\n controllerRef.current = new AbortController();\\n const signal = controllerRef.current.signal;\\n\\n try {\\n const response = await fetch(\'/api/search\', {\\n method: \'POST\',\\n body: JSON.stringify({\\n query: target.value\\n }),\\n signal\\n });\\n\\n const data = await response.json();\\n setResults(data.results);\\n } catch(e) {\\n // Silently catch aborted requests\\n // For production, you might want to check if error is an AbortError\\n }\\n }\\n\\n return (\\n <div>\\n <input type=\\"text\\" onChange={handleOnChange} />\\n {/* Results rendering */}\\n </div>\\n );\\n};\\n\\n
In the example above, we created a search function that cancels previous API requests when a user types something new. Using useRef
, we’re able to reference and track the current request and abort it each time the input changes.
This pattern will save you countless headaches. With your fingers crossed and a strong belief in AbortController
, you shouldn’t get outdated results showing up after newer ones.
Ordinarily, an asynchronous process may succeed, fail, or take longer than anticipated. Therefore, it is logical to cancel an asynchronous operation that has taken more time than it should or whose results you don’t need. The AbortController
API is a handy functionality for doing just that.
The AbortController
API is globally available; you don’t need to import it. An instance of the AbortController
class exposes the abort
method and the signal
property. The signal
property is an instance of the AbortSignal
class. Each AbortController
class instance has a corresponding AbortSignal
class instance, which you can access using the controller’s signal
property.
You pass the signal
property to a cancelable asynchronous API and invoke the controller’s abort
method to trigger the abort process. If the built-in APIs do not meet your use case, you can also implement a custom abortable API using AbortController
and AbortSignal
. However, follow the best practices hinted above to prevent memory leaks.
I’ll leave you with this; the beauty of the AbortController
API is that you can make virtually any asynchronous operation abortable, even those that don’t natively support cancellation.
Did I miss anything? Leave a comment in the comments section below.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\n<identifier>
Node.js developers often have to dynamically generate HTML documents from the server side and send them to connected web app users. For example, a generic MVC web app displays the logged user’s name within the header section by dynamically constructing the header section HTML within the Node.js server. We can generate HTML documents using native HTML syntax and JavaScript with simple string concatenation, JavaScript template literals, or string replacement-based templating logic. However, these approaches become time-consuming and complex as the project complexity grows.
\\nTemplate engines offer a fully featured, pre-developed, productive solution for generating HTML documents based on templates and provided data objects. EJS (Embedded JavaScript Templating) is a popular template engine that we can use to send dynamically generated HTML documents from Node.js apps.
\\nIn this tutorial, we’ll learn EJS templating syntax, basic examples, and how to properly set up and use EJS templating in your Node.js apps. We’ll also explore advanced EJS templating techniques, best practices, and common development pitfalls, and compare EJS with other popular templating engines.
\\nEditor’s note — This blog was updated on 11 March, 2025, many thanks to the contributions of Shalitha Suranga. The revised post improves clarity with a more comprehensive introduction to EJS, detailed installation instructions, and expanded code examples. It also includes a comparison with other templating engines, a troubleshooting section, and answers to common reader questions. These updates make it an even more practical guide for developers working with EJS.
\nA template engine dynamically generates content based on two primary inputs:

- A template written in the engine’s syntax, with placeholders where values should appear
- A data object that supplies the values for those placeholders
\\nUnderstand the underlying process of a template engine using the following diagram:
\\nWeb template engines, like EJS, handle the task of interpolating data into HTML code while providing some features like repetitive blocks, nested blocks, shared templates that would have been difficult to implement by concatenating strings, or similar built-in JavaScript features.
\\nEJS is a web template engine that lets you generate HTML documents by embedding JavaScript code snippets within HTML code. EJS takes an HTML template written using EJS syntax and data as input parameters and renders the final HTML document by injecting data based on template logic.
\nAmong its highlighted features, the EJS engine handles .ejs template files much like PHP handles .php files to generate HTML documents dynamically.

EJS is a popular web template engine that most Node.js developers integrate with their favorite web frameworks to build complex, MVC, server-rendered web apps.
\\nWe typically use EJS with web frameworks like Express and Fastify, but it’s possible to use EJS for templating requirements without a specific web framework. Let’s understand how EJS works by using it without a web framework.
\\nCreate a new Node.js project by using NPM or Yarn as follows:
\\nnpm init -y\\n# --- or ---\\nyarn init -y\\n\\n
Install EJS to the project:
\\nnpm install ejs\\n# --- or ---\\nyarn add ejs\\n\\n
Add the following code snippet to the main.js
file:
const ejs = require(\'ejs\');\\n\\nconst template = `\\n <h2>Hello <%= name %>!</h2>\\n <p>Today is <%= date %></p>\\n <p>1 + 2 is <%= 1 + 2 %></p>\\n`;\\nconst data = {\\n name: \'John\',\\n date: new Date().toISOString().split(\'T\')[0]\\n};\\n\\nconst output = ejs.render(template, data);\\n\\nconsole.log(output);\\n\\n
The render(template, data)
method generates output based on the input template string and data. Here, the template uses the <%= %>
tag to render each data object property and we passed the required data properties from the second parameter of the render()
method. EJS uses JavaScript as the templating language so expressions like 1 + 2
directly get executed.
When you run the main.js
file with Node.js, you’ll see the generated output on the terminal, as shown in the following preview:
You can also store the template content within a file named template.ejs
and use the renderFile()
method as follows:
ejs.renderFile(\'template.ejs\', data)\\n .then((output) => console.log(output));\\n\\n
We’ll learn how to use .ejs
files in a web framework in an upcoming section soon!
You have just seen the basic syntax of EJS. The syntax follows a simple, HTML-like pattern:
\\n<startingTag JavaScript expression closingTag>\\n\\n
For example, previously we used the <%= name %>
block to render the value of the name
data element. EJS has different tags for different purposes. The start tag <%=
is called the “HTML-escaped output” tag because if the string in the content has characters with special meaning in HTML, like <, >, and &, those characters will be escaped (replaced by HTML entities) in the output string.
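To make the difference concrete, here is a small sketch comparing the escaped and unescaped output tags (the html property is a hypothetical data value):

const ejs = require("ejs");

const data = { html: "<b>Hello</b>" };

// <%= %> escapes special characters; <%- %> outputs them as-is
console.log(ejs.render("<%= html %>", data)); // &lt;b&gt;Hello&lt;/b&gt;
console.log(ejs.render("<%- html %>", data)); // <b>Hello</b>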
EJS supports the following tags:
| Syntax | Description | Example |
| --- | --- | --- |
| <% expression %> | Scriptlet tag; produces no output and is used for control flow | <% if(isLogin) { %> |
| <%_ expression %> | Scriptlet tag that strips all preceding whitespace | <%_ if(isLogin) { %> |
| <%= expression %> | Outputs HTML-escaped data | <%= name %> |
| <%- expression %> | Outputs HTML-unescaped data | <%- htmlString %> |
| <%# comment %> | Commenting tag | <%# This is a comment %> |
| <%% | Outputs the <% literal | <%% |
Earlier, we wrote a sample templating example without using a web framework to simplify the introduction of this tutorial. Using a Node.js web framework undoubtedly boosts developer productivity and speeds up feature delivery for building Node.js web apps, so let’s focus on using EJS with a web framework from now on.
\\nWe will use Express in this tutorial because it’s one of the best Node frameworks. It’s minimalistic and easy to get started with.
\\nLet’s start a project from scratch. Create a new directory where you want to put the project files.
\\nInitialize a new Node.js project in the directory by running npm init -y
or yarn init -y
in the terminal, then to install Express and EJS, run:
npm install express ejs\\n# --- or ---\\nyarn add express ejs\\n\\n
After installation, create an app.js
file and a views
directory in the root project directory. Inside the views
directory, create two directories — pages
and partials
. I will be explaining why we need these directories shortly.
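The partials directory typically holds shared template fragments that pages pull in with EJS’s include() function. As a minimal sketch (assuming a views/partials/header.ejs file exists), a page template could reuse it like this:

<%- include('../partials/header') %>
<h1>Hi, there!</h1>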
First, copy the following into app.js
:
const express = require(\'express\');\\nconst app = express();\\nconst port = 3000;\\n\\napp.set(\'view engine\', \'ejs\');\\n\\napp.get(\'/\', (req, res) => {\\n res.render(\'pages/index\');\\n});\\n\\napp.listen(port, () => {\\n console.log(`App listening at port ${port}`);\\n});\\n\\n
Now, inside the views/pages
directory, create a file called index.ejs
. Add the following to index.ejs
:
<h1>Hi, there!</h1>\\n\\n
If you run node app.js
on the terminal from the project directory, and then visit http://localhost:3000
, you should see the following result:
Now, let’s walk through some parts of the code and understand what is going on:
\napp.set('view engine', 'ejs') is self-explanatory: we are setting EJS as the Express app's view engine. By default, Express looks inside a views directory when resolving template files, which is why we had to create a views directory
In res.render('pages/index'), we are calling the render() method on the response object. This renders the provided view (pages/index in this case) and sends the rendered HTML string back to the client
Notice that we didn't have to specify the .ejs file extension, because we already registered the view engine with app.set('view engine', 'ejs'). We also didn't have to write the path as views/pages/index, because the views directory is used by default
Recall that our aim is to combine data with templates. We can do that by passing a second argument to res.render()
. This second argument must be an object, whose properties will be accessible in the EJS template file.
Update app.js
like so:
const express = require(\'express\');\\nconst app = express();\\nconst port = 3000;\\n\\napp.set(\'view engine\', \'ejs\');\\n\\nconst user = {\\n firstName: \'John\',\\n lastName: \'Doe\'\\n};\\n\\napp.get(\'/\', (req, res) => {\\n res.render(\'pages/index\', { user });\\n});\\n\\napp.listen(port, () => {\\n console.log(`App listening at port ${port}`);\\n});\\n\\n
The above GET
endpoint renders the index.ejs
template by passing the user details object via the user
property name, so we can now use the user
object identifier within the template to access available properties of the user
details object.
Update index.ejs
too as follows:
<h1>Hi, <%= user.firstName %>!</h1>\\n\\n
Run node app.js
and you should get this:
The EJS scriptlet tag, <% %>
can contain view layer logic to render HTML content dynamically based on the provided data elements. Any JavaScript syntax can be used in this tag. You can use JavaScript if
statements to render HTML segments conditionally.
To see this in action, update the user details object in app.js
as follows:
const user = {\\n firstName: \'John\',\\n lastName: \'Doe\',\\n isAdmin: true\\n};\\n\\n
Then update index.ejs
:
<h1>Hi, <%= user.firstName %>!</h1>\\n<% if (user.isAdmin) { %>\\n <div style=\\"background: #ddd; padding: 0.5em\\">You are an administrator</div>\\n<% } %>\\n\\n
If you run the app, you will see the block inside the if
statement displayed, as shown in the following preview:
Set isAdmin: false
in the user details object, and the HTML block won’t be displayed.
Take note of the syntax of the scriptlet <% if(user.isAdmin) { %>
. The opening { sits inside the scriptlet block containing the if condition, and the closing } goes in a separate scriptlet block further down. EJS scriptlet tags work much like PHP tags.
Because the <% %>
scriptlet tag can contain any valid JavaScript code, we can easily loop through and display data in EJS using JavaScript loop structures. You can use any preferred JavaScript loop structure with EJS by wrapping repetitive segments with a scriptlet block.
Create a new GET
endpoint named /articles
and pass a list of sample articles into its template by adding the following code snippet to the app.js
:
const articles = [\\n {id: 1, title: \'Lorem ipsum dolor sit amet\', body: \'Lorem ipsum dolor sit amet, consectetur adipiscing elit.\'},\\n {id: 2, title: \'Nam blandit pretium neque\', body: \'Lorem ipsum dolor sit amet, consectetur adipiscing elit.\'},\\n {id: 3, title: \'Phasellus auctor convallis purus\', body: \'Lorem ipsum dolor sit amet, consectetur adipiscing elit.\'}\\n];\\n\\napp.get(\'/articles\', (req, res) => {\\n res.render(\'pages/articles\', { articles });\\n});\\n\\n
Create a new file inside the views/pages
named articles.ejs
and add the following code:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n<head>\\n <meta charset=\\"UTF-8\\">\\n <meta http-equiv=\\"X-UA-Compatible\\" content=\\"IE=edge\\">\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1.0\\">\\n <title>Articles</title>\\n</head>\\n<body>\\n <ul>\\n <% for(const article of articles) { %>\\n <li>\\n <h2><%= article.title %></h2>\\n <p><%= article.body %></p>\\n </li>\\n <hr />\\n <% } %>\\n </ul>\\n</body>\\n</html>\\n\\n
Here we used the for...of
JavaScript loop structure, but you can also use the forEach()
array method, depending on your preference:
<% articles.forEach((article)=> { %>\\n <li>\\n <h2><%= article.title %></h2>\\n <p><%= article.body %></p>\\n </li>\\n <hr />\\n<% }) %>\\n\\n
When you run the app, visit http://localhost:3000/articles
and you should see the following:
Notice the following implementation facts:
\nWe passed articles, which is an array of article objects containing a title and a body, to the articles.ejs template. Then, in the template, we loop through the array using for...of (or forEach()) to render each article object as an HTML list item
EJS repeats the <li></li> block, since the parent scriptlet tag has a loop structure
The article variable that references each item of the array on each iteration of the loop (<% for(const article of articles) { %>) is accessible in the nested block of the template code until we reach the closing bracket, <% } %>
Try to use other JavaScript loop structures to render this article list.
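For instance, a classic index-based for loop works just as well inside scriptlet tags; here is a sketch using the same articles array:

<% for (let i = 0; i < articles.length; i++) { %>
  <li>
    <h2><%= articles[i].title %></h2>
    <p><%= articles[i].body %></p>
  </li>
  <hr />
<% } %>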
\\nSome parts of websites stay the same across different pages, like the header, footer, and sidebar. If we repetitively add these parts in each page template, your project becomes hard to maintain since you’ll have to edit multiple templates to edit something in a common frontend section, i.e., adding a new link to the website’s primary navigation bar. EJS lets you create shared templates and import them with the include(file)
inbuilt function.
Recall that we created the views/partials
directory earlier. Create two new files named header.ejs
and footer.ejs
in this folder.
The content of header.ejs
should be the following:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n<head>\\n <meta charset=\\"UTF-8\\">\\n <meta http-equiv=\\"X-UA-Compatible\\" content=\\"IE=edge\\">\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1.0\\">\\n <link \\n rel=\\"stylesheet\\" \\n href=\\"https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css\\" \\n integrity=\\"sha384-QWTKZyjpPEjISv5WaRU9OFeRpok6YctnYmDr5pNlyT2bRjXh0JMhjY6hW+ALEwIH\\" crossorigin=\\"anonymous\\">\\n <title>Articles</title>\\n</head>\\n<body>\\n<nav class=\\"navbar navbar-expand-lg navbar-dark bg-dark\\">\\n <div class=\\"container-fluid\\">\\n <a class=\\"navbar-brand\\" href=\\"/\\">SampleBlog</a>\\n <div class=\\"collapse navbar-collapse\\" id=\\"navbarNav\\">\\n <ul class=\\"navbar-nav mr-auto\\">\\n <li class=\\"nav-item\\">\\n <a class=\\"nav-link\\" href=\\"/\\">Home</a>\\n </li>\\n <li class=\\"nav-item\\">\\n <a class=\\"nav-link\\" href=\\"/articles\\">Articles</a>\\n </li>\\n <li class=\\"nav-item\\">\\n <a class=\\"nav-link\\" href=\\"#\\">About</a>\\n </li>\\n </ul>\\n </div>\\n </div>\\n</nav>\\n\\n
We have included a link to Bootstrap in header.ejs
because we will be using Bootstrap classes to style the sample project.
Now, update footer.ejs
like so:
<footer class=\\"p-3\\">\\n <p class=\\"text-muted\\">© <%= new Date().getFullYear() %> Simple Blog</p>\\n</footer>\\n</body>\\n</html>\\n\\n
Then update the articles.ejs
file:
<%- include(\'../partials/header\') %>\\n<main class=\\"container py-5\\">\\n <h1>Articles</h1>\\n <ul class=\\"pt-4\\">\\n <% for(const article of articles) { %>\\n <li>\\n <h3><%= article.title %></h3>\\n <p><%= article.body %></p>\\n </li>\\n <hr />\\n <% } %>\\n </ul>\\n</main>\\n<%- include(\'../partials/footer\') %>\\n\\n
Note the following implementation facts:
\nWe included the header.ejs and footer.ejs partials using the include() function, which takes the relative path to the file as an argument. Because pages and partials are in the same directory, to access partials from pages, we first have to go up out of the pages directory, using the template file path ../partials/header
We used the HTML-unescaped output tag (<%- %>) instead of the escaped output tag, since we needed to render the HTML code of the shared template directly. Make sure not to use the HTML-unescaped output tag with untrusted user inputs, because it can expose your application to script injection attacks
Run node app.js, visit http://localhost:3000/articles, and you should see this:
Now we can reuse these EJS partials on other pages and avoid writing repetitive code segments. Include the partials within the index.ejs
file as follows:
<%- include(\'../partials/header\') %>\\n<main class=\\"container py-5\\">\\n <h1>Hi, I am <%= user.firstName %> <%= user.lastName %></h1>\\n <h3>Welcome to my blog</h3>\\n</main>\\n<%- include(\'../partials/footer\') %>\\n\\n
Click on the “Home” link. You’ll see the homepage with the same header and footer we’ve used for the articles page:
\nNote that we can use any JavaScript expression in the EJS tags, so we could write this instead:
\\n... \\n<h1>Hi, I am <%= user.firstName + \' \' + user.lastName %> \\n...\\n\\n
Something is wrong on the index page. Can you see it?
\\nThe title of the homepage is “Articles,” because the header.ejs
partial has the page title hard-coded as such, which is not desirable. We want the title of the page to reflect the content of the page, so we must pass in the title as an argument.
EJS makes it easy because a partial has access to every variable in the parent view, so we just have to pass the variable in the object alongside the call to res.render()
.
Update the call to res.render()
in app.js
as follows:
//...\\napp.get(\'/\', (req, res) => {\\n res.render(\'pages/index\', {\\n user,\\n title: \'Home\'\\n });\\n});\\n\\napp.get(\'/articles\', (req, res) => {\\n res.render(\'pages/articles\', {\\n articles,\\n title: \'Articles\'\\n });\\n});\\n//...\\n\\n
Then update the title tag in header.ejs
:
...\\n<title><%= title %></title>\\n...\\n\\n
Run the app again and each page should have the correct title:
\\nYou can also pass a variable to a partial when you include it as follows:
\\n<%- include(\'../partials/header\', { title :\'Page Title\' }) %>\\n\\n
Variables passed this way take precedence over variables passed through Express's render()
function.
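For example, assuming the header partial from above and a title already passed through res.render(), the partial renders the value given to include():

<%# title was set to 'Home' in res.render(), but 'Landing' wins here %>
<%- include('../partials/header', { title: 'Landing' }) %>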
I intentionally didn’t implement the About page. Create the About page by passing some data to the about.ejs
to be more familiar with EJS partials and data passing.
EJS uses JavaScript as the templating language, so it directly throws JavaScript exceptions and displays them in the terminal and on the website frontend. Here are some common EJS templating errors and how to resolve them:
\\n<identifier>
is not defined
EJS throws a ReferenceError
when you try to use a data property that is not provided by the render()
function. This error can be fixed by sending the data element you used in the template or by checking whether the identifier is available as follows:
<p>Visitors: <%= typeof visitors != \'undefined\' ? visitors : 0 %></p>\\n\\n
EJS throws a TypeError
if we try to read the properties of undefined identifiers. For example, if we try to access user.info.email
within the template but the user
object doesn’t contain the info
nested object, EJS throws this error.
Using JavaScript’s optional chaining operator is a popular way to solve these issues:
\\n<p><%= user?.info?.email %></p>\\n\\n
Most developers know that escaping HTML-specific characters makes HTML code display as visible text in the browser viewport. EJS renders HTML-escaped output with the <%= %>
tag, so if we use it to include a template, the raw code of the included template gets HTML-escaped, rendering visible HTML code on the browser viewport.
To solve this, check whether you use include()
as follows:
<%= include(\'../partials/header\') %>\\n\\n
Replace =
with -
to render HTML-unescaped output, sending the raw HTML to the browser properly:
<%- include(\'../partials/header\') %>\\n\\n
SyntaxError
is a general error type that JavaScript interpreters use to report language syntax issues, so EJS throws it for JavaScript-related syntax issues in templates. Fixing the underlying JavaScript syntax is the only way to resolve it. For example, the above error is thrown due to a missing curly brace in an if
statement, so closing the if
block properly resolves this issue:
<% if(user.isAdmin) { %>\\n <p>You are not an administrator</p>\\n<% } %>\\n\\n
You can use the ejs-lint npm package or EJS code editor plugins to detect EJS syntax issues during development.
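For instance, assuming ejs-lint is installed (or run through npx), you can point it at a template file and it will report the first syntax issue it finds:

npx ejs-lint views/pages/articles.ejs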
\\nAdhering to development best practices is the most practical way to create high-quality, maintainable, and beginner-friendly codebases. Consider adhering to the following best practices while developing EJS templates:
\nPrefer scriptlet-based control flow for readability: <% if(user.isAdmin) { %>...<% } %> is better than using many <% user.isAdmin ? .. : .. %> blocks
Never use the HTML-unescaped output (<%- %>) tag with user inputs. Sanitize HTML strings properly if you need to render user input data as raw HTML
Apart from these EJS-specific best practices, write clean, readable, and error-free JavaScript code to improve the quality of JavaScript expressions in EJS templates.
\nEJS has two main competitors: Pug and Handlebars. Both alternatives offer competitive feature sets, so let's see how EJS tries to offer a better templating solution by comparing it with Pug and Handlebars:
\\nComparison factor | \\nEJS | \\nPug | \\nHandlebars | \\n
Document structuring method | \\nNative HTML tags | \\nPug document syntax | \\nNative HTML tags | \\n
Templating language | \nJavaScript | \nPug language and JavaScript | \nHandlebars language, extra features should be added via helpers | \n
Templating language complexity | \\nMinimal | \\nMinimal | \\nModerate (uses some unique syntax like Bash scripting does) | \\n
Beginner-friendliness | \nBeginner-friendly since it uses simple tags, HTML, and JavaScript | \nCan be challenging for new developers since the structuring language is not HTML | \nMore beginner-friendly than Pug, but limited features might frustrate beginners | \n
Web framework support | \\nGood | \\nGood | \\nGood | \\n
Partials supported? | \\nYes | \\nYes | \\nYes | \\n
Template composition supported? (extending template blocks in a base template) | \\nNo | \\nYes | \\nYes | \\n
Complete ports in other languages (e.g., Go, Java) | \nNot available since it depends on JavaScript | \nAvailable | \nMustache (the base language of Handlebars) implementations are available | \n
EJS doesn’t implement template composition, but developers can use partials to decompose complex apps into reusable parts using the same technique that PHP developers use. Overall, EJS offers a simple and powerful templating solution by integrating native JavaScript with HTML using ASP-like tags.
\nIn this article, we explored template engines, introduced EJS for Node.js app templating, and learned how to use it. We saw how to reuse code with partials and how to pass data to them. Adhere to the EJS templating best practices discussed above to write highly readable, maintainable, and beginner-friendly template source code.
\\nHere is the EJS syntax reference if you want to learn more about what’s possible with EJS. You can check out the complete code for this article from this GitHub repository.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nOverlays are visual effects that add a layer over content, often enhancing design, readability, or interactivity. Image overlays add a semi-transparent layer over images to improve text visibility, emphasize details, or enable hover effects.
\\nThis guide begins with the basics of applying image overlays in CSS and gradually explores more interactive techniques like hover effects and animations.
\\nEditor’s note: This blog was last updated by Ibadehin Mojeed in March 2025 to provide more concise, streamlined information on CSS image overlays.
\\nA common way to create an overlay effect in CSS is by using pseudo-elements (::before
or ::after
) or an additional <div>
wrapper.
Let’s explore both.
\\n::before
pseudo-elementTo create an overlay using the ::before
pseudo-element, wrap the image inside a container:
<div class=\\"image-wrapper\\">\\n <img\\n src=\\"https://images.unsplash.com/photo-1609220136736-443140cffec6?q=80&w=800&h=600&auto=format&fit=crop\\"\\n alt=\\"Sample Image\\"\\n width=\\"800\\"\\n height=\\"600\\"\\n />\\n <div class=\\"overlay-text\\">\\n The Pros and Cons of Buying vs. Renting a Home\\n </div>\\n</div>\\n\\n
Next, we apply the overlay using ::before
on the .image-wrapper
container. The pseudo-element is positioned absolutely and given a semi-transparent background:
.image-wrapper::before {\\n content: \\"\\";\\n position: absolute;\\n top: 0;\\n left: 0;\\n width: 100%;\\n height: 100%;\\n background: rgba(0, 0, 0, 0.3);\\n}\\n\\n
Since ::before
is absolutely positioned, we set the .image-wrapper
to position: relative;
to ensure proper placement:
.image-wrapper {\\n position: relative;\\n /* Other styles */\\n}\\n\\n
Check out the live demo on CodePen:
\\nSee the Pen
\\nimage overlay ::before pseudo by Ibaslogic (@ibaslogic)
\\non CodePen.
You can customize the overlay color and transparency by adjusting the rgba
values in the CSS.
Even though the ::before
pseudo-element is applied to .image-wrapper
, the text inside remains visible. This happens because .overlay-text
is also positioned absolutely within .image-wrapper
, placing it in the same positioning context as the overlay.
In the stacking order, elements are layered based on their order in the HTML. Since .overlay-text
appears after ::before
in the DOM, it naturally sits on top of the overlay, ensuring the text remains readable:
We can explicitly control the stacking order using the z-index
. Assigning a higher z-index
value to an element ensures it remains above others, while a lower z-index
keeps it beneath.
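For instance, to make the stacking explicit regardless of markup order, you could pin both layers; the exact values below are arbitrary, only their relative order matters:

.image-wrapper::before {
  z-index: 1; /* tint stays below the text */
}

.overlay-text {
  z-index: 2; /* text stays above the tint */
}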
<div>
overlayInstead of using ::before
, we can add a <div>
element specifically for the overlay inside the .image-wrapper
container:
<div class=\\"image-wrapper\\">\\n <img\\n src=\\"https://images.unsplash.com/photo-1609220136736-443140cffec6?q=80&w=800&h=600&auto=format&fit=crop\\"\\n alt=\\"Sample Image\\"\\n width=\\"800\\"\\n height=\\"600\\"\\n />\\n <div class=\\"overlay\\"></div>\\n <div class=\\"overlay-text\\">\\n The Pros and Cons of Buying vs. Renting a Home\\n </div>\\n</div>\\n\\n
The .overlay
<div>
is styled similarly to the ::before
pseudo-element, with absolute positioning and a semi-transparent background:
.overlay {\\n position: absolute;\\n top: 0;\\n left: 0;\\n width: 100%;\\n height: 100%;\\n background: rgba(0, 0, 0, 0.3);\\n}\\n\\n
You can see this example in action on CodePen:
\\nSee the Pen
\\nimage overlay <div> overlay by Ibaslogic (@ibaslogic)
\\non CodePen.
In this setup, .overlay-text
naturally appears above .overlay
due to its placement later in the HTML. However, we can explicitly control the stacking order using z-index
.
There are times when layering one image over another is necessary, whether for watermarking, branding, or displaying previews and thumbnails. One way to achieve this is by placing two images inside a wrapper, then positioning the top layer absolutely.
\\nPlace the images inside a wrapper like so:
\\n<div class=\\"image-wrapper\\">\\n <img\\n src=\\"https://images.unsplash.com/photo-1609220136736-443140cffec6?q=80&w=800&h=600&auto=format&fit=crop\\"\\n alt=\\"Background Image\\"\\n class=\\"background-image\\"\\n />\\n <img\\n src=\\"https://images.unsplash.com/photo-1697229299093-c920ab53bfb1?q=80&w=800&h=600&auto=format&fit=crop\\"\\n alt=\\"Overlay Image\\"\\n class=\\"overlay-image\\"\\n />\\n</div>\\n\\n
Then, go ahead and position the top layer absolutely:
\\n/* Other styles */\\n.overlay-image {\\n position: absolute;\\n top: 50%;\\n left: 50%;\\n transform: translate(-50%, -50%);\\n width: 50%;\\n border: 2px solid white;\\n opacity: 0.7;\\n}\\n\\n
You can see this example in action on CodePen:
\\nSee the Pen
\\nimage layer in CSS by Ibaslogic (@ibaslogic)
\\non CodePen.
You can adjust the size and position of the overlay image to suit your design.
\\nNow that you’ve learned how to create basic overlays and layer images, let’s explore ways to make them more interactive with hover effects and animations.
\\nOne simple approach is adjusting the overlay’s opacity when the user hovers over the image:
\\n.overlay {\\n /* Other styles */\\n transition: background 0.3s ease-in-out;\\n}\\n.image-wrapper:hover .overlay {\\n background: rgba(0, 0, 0, 0.6);\\n}\\n\\n
This creates a subtle effect where the overlay darkens on hover, enhancing visual feedback. Try hovering over the image in the CodePen below to see it in action:
\\nSee the Pen
\\nimage hover overlay div overlay by Ibaslogic (@ibaslogic)
\\non CodePen.
Another common technique is layering images so that a different picture appears when hovering over a product. This is widely used on ecommerce websites to showcase product variations dynamically.
\\nTo achieve this, place two images inside a wrapper:
\\n<div class=\\"product-image\\">\\n <img\\n src=\\"https://images.unsplash.com/photo-1676291055501-286c48bb186f?w=900&auto=format&fit=crop&q=60\\"\\n alt=\\"Product Front\\"\\n class=\\"default-image\\"\\n />\\n <img\\n src=\\"https://images.unsplash.com/photo-1676303679145-8679f5ceeb16?w=900&auto=format&fit=crop&q=60\\"\\n alt=\\"Product Hover\\"\\n class=\\"hover-image\\"\\n />\\n</div>\\n\\n
The .hover-image
will be positioned absolutely and hidden by default with opacity: 0
:
/* Other styles */\\n.hover-image {\\n position: absolute;\\n inset: 0;\\n opacity: 0;\\n transition: opacity 0.5s ease, transform 2s cubic-bezier(0, 0, 0.44, 1.18);\\n}\\n.product-image:hover .hover-image {\\n opacity: 1;\\n transform: scale(1.12);\\n}\\n.product-image:hover .default-image {\\n opacity: 0;\\n}\\n\\n
On hover, it smoothly fades in and scales up while the .default-image
fades out, creating an engaging transition effect ideal for product previews.
Try hovering over the image in the CodePen below to see it in action:
\\nSee the Pen
\\nimage layer hover by Ibaslogic (@ibaslogic)
\\non CodePen.
Adding an overlay to a background image enhances the visual appeal of hero sections and banners. This effect can be achieved using various methods, including pseudo-elements, extra <div>
elements, CSS properties like background-image
, background-blend-mode
, and mix-blend-mode
, or even a trick with border-image
.
In this section, we’ll explore two straightforward methods: one using a pseudo-element and another leveraging linear-gradient()
with a background-image
.
Similar to how we applied the ::before
pseudo-element to the <img>
container earlier, we can also use it to create an overlay on top of a background image.
To apply an overlay to a background image, use the following HTML structure:
\\n<section class=\\"hero-section\\">\\n <div class=\\"content\\">\\n <h1>Hero title</h1>\\n <p>Hero description here</p>\\n </div>\\n</section>\\n\\n
In the CSS, we apply the background image to .hero-section
and use ::before
to create an overlay:
.hero-section {\\n position: relative;\\n background-image:\\n url(\\"https://images.unsplash.com/photo-1609220136736-443140cffec6?q=80&w=800&h=600&auto=format&fit=crop\\");\\n background-size: cover;\\n background-position: center;\\n color: white;\\n}\\n.hero-section::before {\\n content: \\"\\";\\n position: absolute;\\n inset: 0;\\n background: rgba(0, 0, 0, 0.5);\\n}\\n/* Other styles */\\n\\n
The overlay spans the entire .hero-section
, covering all its contents, including .content
. To ensure the text remains visible above the overlay, we will apply z-index: 1
to .content
and set its position to relative
, allowing the z-index
to take effect:
.content {\\n /* ... */\\n position: relative;\\n z-index: 1;\\n}\\n\\n
See this example in action on CodePen below:
\\nSee the Pen
\\nCSS ::before pseudo-element overlay your background images by Ibaslogic (@ibaslogic)
\\non CodePen.
With the same HTML structure as before, we can use CSS to add a linear gradient overlay directly on top of the background image:
\\n/* Other styles */\\n.hero-section {\\n background-image: linear-gradient(rgba(0, 0, 139, 0.5), rgba(139, 0, 0, 0.5)),\\n url(\\"https://images.unsplash.com/photo-1609220136736-443140cffec6?q=80&w=800&h=600&auto=format&fit=crop\\");\\n background-size: cover;\\n background-position: center;\\n color: white;\\n}\\n\\n
The linear-gradient()
function overlays a gradient on top of the background image, transitioning from dark blue to dark red with 50 percent opacity. This effect helps improve text readability while adding a stylish effect.
See this example in action on CodePen below:
\\nSee the Pen
\\nCSS linear gradient overlay your background images by Ibaslogic (@ibaslogic)
\\non CodePen.
Image overlays enhance visuals, improve readability, and add interactivity. This guide covered key techniques such as the ::before
pseudo-element, linear-gradient()
for gradient overlays, and interactive hover effects. Mastering these methods allows you to create stunning hero sections, banners, and dynamic product previews. You can experiment with colors, opacities, and animations to tailor overlays to your needs.
If you found this guide helpful, feel free to share it. Also, let us know in the comments which overlay technique is your favorite.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\ngit branch -r
and git branch -a
\\n git fetch
?\\n git pull
vs. git fetch
\\n git checkout -b
\\n checkout <remote name>/<remote branch name>
\\n Git is a powerful distributed version control system that not only preserves the history of your code but also facilitates collaboration for development teams.
\\nBranches are at the heart of Git’s workflow, making it easy to experiment, fix bugs, and develop new features without messing with your main codebase. You can create, delete, compare, merge, rebase, publish, and track changes — all without breaking a sweat.
\\nWhether you’re working independently or as part of a team, understanding how to use branches effectively will elevate your Git workflow. Let’s dive in.
\\nA remote branch is a reference to a specific commit that exists on a remote repository, such as GitHub, GitLab, or Bitbucket. Remotes allow developers to collaborate effectively by keeping track of changes made by others and ensuring everyone stays in sync.
\\nRemote branches are essential for sharing code, reviewing updates, and maintaining a smooth workflow in team projects. They help developers stay aligned with the latest changes while providing a structured way to contribute without interfering with the main branch.
\\nAt this point, you might be wondering, how do you switch between branches? We’ll cover checking out a remote branch in detail later in this guide. For now, just know that this process plays a key role in making remote branches such a powerful collaboration tool.
\\nBefore working with remote branches in Git, let’s discuss a few prerequisites:
\\nMake sure that you have Git installed on your local machine. If you are unsure, verify the status with the following command:
\\ngit -v or Git –version\\n\\n
Whether you’re working solo or in a collaborative environment, ensure you have a local copy of the remote repository. Run the following command to clone said repository:
\\ngit clone <repository URL>\\n\\n
Familiarity with basic Git commands like pull, status, merge, etc., makes managing remote branches less daunting.
\\nKnowing the available remote branches is good practice, especially in a collaborative environment. While this is possible with the graphical user interface (GUI), working in the command line is the best practice — not to mention that it’s faster. This section will cover different ways to view and manage remote branches effectively.
\\nIf you are going solo or in a collaborative environment and you want to view or list out all the branches that exist on the remote repository, you can use the following command:
\\ngit branch -r\\n\\n
The -r
flag ensures that the list only contains remote branches. Below is an example of what it would return:
origin/HEAD → origin/main
— The main branch is the default branchorigin/develop
— A develop branch that exists remotelyorigin/feature/auth
— A feature/auth branch that exists on the remote repositoryorigin/feature/post-ad
— A feature/post-ad branch that exists remotelyorigin/main
— A main branch that exists remotelyIt is important to note that a remote repository can have from one to any number of branches, depending on the project.
\\nTo list out all the local and remote branches, use the following command:
\\ngit branch -a\\n\\n
This is an example of what the command will return:
\\ngit branch -r
and git branch -a
The git branch -r
command lists out only all the remote branches.
git branch -a
, on the other hand, also lists out branches but combines both local and remote branches.
Git can provide comprehensive information about a specific remote repository, including details like its branches, tracking configuration, and more. This is achieved using the following command:
\\ngit remote show origin\\n\\n
Your output will be:
\\nLet’s discuss some of these elements:
\\nFetch
and pull URLs — Define where Git pulls from and pushes toHEAD branch
— Shows that the main branch is the default branch of the remote repositorygit pull
— Define which local branches are tracking the remote branchesgit push
— Define which local branches are set up for pushing changes to the remote repositorySometimes, there are just too many branches. You might want to filter out the ones of interest so you don’t have to mindlessly wade through the entire list. To filter remote branches, you can use the following command:
\\ngit branch -r |grep feature\\n\\n
This lists out all the remote branches that have feature
in them. The output looks like this:
In Git, developers have a fair amount of freedom in how they name branches. If the project is collaborative, naming conventions are typically specific to the team.
\\nStill, Git enforces a small set of naming rules that must be abided by when naming branches. The rules are as follows:
\nNo . at the beginning of a branch path component (e.g., origin/feature/.new-feature)
No branch name ending in .lock, since it is a reserved extension
No consecutive dots .. (e.g., origin/feature/new..feature)
No @{ sequence anywhere in the name
If you break any of these rules while naming a branch, Git will refuse to create the branch and throw an error telling you exactly what's wrong. You can also check whether a name is fit for a branch name by running the following command:
\\ngit check-ref-format –branch <branch name>\\n\\n
If the branch name is valid, that command will return the name. If the name violates any of the Git branch naming rules above, it returns an error message.
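For example, with two made-up names:

git check-ref-format --branch feature/login   # valid: prints feature/login
git check-ref-format --branch feature/..bad   # invalid: exits with an error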
\\nFetching is how Git downloads all of the changes committed to a remote repository. Fetching also allows you to review them before merging into your local branches.
\\ngit fetch
?When a remote repository is cloned from a remote server to a local repository, none of the remote branches are mapped to local branches in the local repository by default, no matter how many branches the remote contains.
\\ngit fetch
is a Git command that helps download the changes committed to a remote repo. It allows you to review the changes before you make the decision to merge the revisions into the local branches. This command updates the remote-tracking branches, ensuring that you can work on the most recent version of the remote repository.
Start with the command:
\\ngit fetch origin develop\\n\\n
This will generate the following:
\\nFetching is not exclusive to only one remote repository; there are also cases where there are multiple remote repos. If that’s the case, use this command:
\\ngit fetch - - all\\n\\n
Fetching before checking out is critical: it guarantees that you have the latest information about your remote branches. Take, for instance, a case where a feature/auth branch is created locally when it already exists remotely. Without fetching, local Git has no way of knowing this, so a duplicate branch is created locally that does not track the remote branch.
\\nIf you run git pull
, there will be conflicts. To avoid that unnecessary headache, git fetch
first, then run git checkout -b <branch name>.
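Putting that together, picking up a branch that already exists remotely might look like this (the branch name is illustrative):

git fetch origin
git checkout -b feature/auth origin/feature/auth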
git pull
vs. git fetch
Under the hood, we have git pull
as a shortcut for git fetch
, followed by git merge
. So, git fetch
downloads all the commits or changes from the remote repository into the local repository. This is Git’s internal database (.git folder) where changes are stored and managed but not placed into the working tree. The working tree shows the current state of your files that are actively edited on your local machine.
For changes to reflect on the working tree, you can then run git merge
. You won’t see any visible effect on the working tree without this last command.
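In other words, for a branch tracking origin/main, these two sequences are roughly equivalent:

git pull origin main
# --- is roughly ---
git fetch origin
git merge origin/main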
The git checkout
command tells Git which branch should receive new commits and changes. A branch is simply a pointer to a specific commit, and a commit represents a snapshot of your repository at a given point in time.
There are several ways to check out a branch. In this section, we’ll explore different methods for checking out a remote branch and discuss best practices for setting up tracking branches.
\\ngit checkout -b
Start with the following:
\\ngit checkout -b <local branch name> <remote name>/<remote branch name>\\n\\n
Let’s break down these elements:
\n-b — This flag will create a new local branch
<local branch name> — This is the name of the new local branch
<remote name>/<remote branch name> — This specifies the remote branch you want the new local branch to track
Here's an example:
\\ngit checkout -b develop origin/develop\\n\\n
Followed by the output:
\\ncheckout <remote name>/<remote branch name>
Unlike in method one, the command git checkout origin/develop
does not create a local branch for tracking a specific remote branch. Instead, Git enters a detached HEAD
state, meaning any changes you make won’t be associated with a branch and could be lost when switching branches.
If you find yourself in a detached HEAD
state but want to create a local branch from it, use:
git checkout -b <new-branch-name>\\n\\n
This creates a new local branch based on the detached HEAD
, ensuring that you can save and push your changes properly:
So, what are tracking branches? They’re local branches that maintain a direct connection to a corresponding remote branch. This setup ensures that when you run git pull
or git push
, Git knows exactly where to fetch changes from and where to push updates.
Using the recommended git checkout method tells Git to automatically set up a relationship between a local and remote branch, where the former automatically tracks the latter.
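If a local branch was created without tracking information, you can attach it to a remote branch afterward:

git branch --set-upstream-to=origin/develop develop
# or, while on the branch itself:
git branch -u origin/develop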
\\nIt’s always good practice to confirm that you’re on the intended/correct branch before pushing commits/changes. To verify your current branch, use the git branch
command.
This command lists all local branches, but the current branch is highlighted with an asterisk:
\\ngit status
works just like git branch in showing your current branch, but gives us more details, including the tracking information:
git branch -vv
checks if the branches are correctly tracking their corresponding remote branches. It basically shows us the tracking information for all the local branches:
To keep your local working tree up to date, you’ll need to merge your local branch with its remote counterpart. But how does this work in practice?
\\nWhen you set up a remote branch, Git maintains local copies of all remote branches. These copies reflect the state of the remote branch as of your last git pull
or git push
.
You can use git pull origin <remote-branch-name>. In context, this might look something like git pull origin main.
Regularly pulling changes is a best practice — it keeps your local branch in sync with the latest updates from the remote repository and helps prevent merge conflicts down the line.
\\nEven though Git is powerful, some things don’t happen automatically — like keeping your local list of remote branches updated. Here are a few common issues and how to fix them.
\\nGit doesn’t automatically update your local list of remote branches when new branches are added or deleted.
\\nTo solve this, run the following command to refresh the list and remove references to deleted branches:
\\ngit fetch -prune\\n\\n
The command updates the list and prunes the references to deleted branches, if any.
\\nIn collaborative environments, a remote branch may be deleted, but it could still appear in your local list.
\\nSimply remove stale remote branches with:
\\nbash\\nCopy code\\ngit remote prune origin\\n\\n
If you’re unsure whether a branch exists on the remote repository, check the available remote branches.
\\nTo solve this, list all remote branches with:git branch--r
.
To ensure a smooth experience when working with Git, particularly with remote branches, here are some best practices to follow:
\\nThis will ensure that your local repository reflects the latest changes from the remote and removes references to deleted branches.
\\nBefore adding new commits, it’s good practice to pull the latest changes from the remote. This ensures your local branch is in sync with the remote, reducing the chances of conflicts.
\\nThis is especially the case when you are in a collaborative environment. This helps everyone on the team understand the purpose of the branch without extra explanation.
\\nOne of the most important things about working with Git in a team is having one feature in one branch. It is good practice to ensure that every feature or bug has its remote branch, allowing multiple people to work on multiple features without worrying about overriding each other’s code. Also, this style keeps things organized.
\\nTo ensure the stability of your main or master branch, avoid committing directly to it. Always create a new branch for your changes and open a pull request to merge your feature branch into the main branch. This process helps prevent untested code from introducing bugs into the codebase.
\\nMastering the process of using git checkout
for remote branches is crucial for a smooth and efficient development workflow. By understanding how to check out, track, and update remote branches, you’ll have the tools you need to collaborate effectively, avoid common pitfalls, and streamline your Git practices.
This guide has provided an in-depth look at how to check out a remote branch with git checkout
, along with key strategies for managing your branches and working in a team. By implementing these best practices, you’ll keep your Git workflow organized, minimize merge conflicts, and enhance collaboration within your development environment.
With these Git strategies under your belt, you’re equipped to take your version control skills to the next level. Happy coding!
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nNVM, or Node Version Manager, is a command-line tool that simplifies the process of managing multiple active Node.js versions. It lets you install, switch between, and manage different Node.js installations without conflicts.
\\nManaging Node.js versions can be challenging, especially when different projects require different Node.js versions. Directly installing Node.js globally can lead to conflicts and broken projects. NVM solves this by providing isolated Node.js environments for each project.
\\nBefore installing NVM, it’s recommended that any existing Node.js or npm installations be removed to avoid potential conflicts.
\\nThe NVM install process on Unix-based systems is straightforward:
\\ncurl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash\\n\\n
(Replace v0.40.1
with the latest version number if needed. Check the NVM GitHub page for the latest version.) The install script adds the following lines to your shell profile; if your shell doesn't pick them up automatically, add them to your profile file yourself:
export NVM_DIR=\\"$HOME/.nvm\\"\\n[ -s \\"$NVM_DIR/nvm.sh\\" ] && \\\\. \\"$NVM_DIR/nvm.sh\\" # This loads nvm\\n[ -s \\"$NVM_DIR/bash_completion\\" ] && \\\\. \\"$NVM_DIR/bash_completion\\" # This loads nvm bash_completion\\n\\n
NVM also supports Windows, typically through WSL (Windows Subsystem for Linux), depending on the WSL version. If that doesn't work, you can try a separate project called NVM for Windows (nvm-windows):
\\nNow that NVM is installed, let’s explore how to use it to manage your Node.js versions:
\\nBefore installing any Node.js versions, it’s helpful to see what’s available. You can do this by using the following command:
\\nnvm ls-remote # On Unix systems\\nnvm list available # On Windows ( nvm-windows ) \\n\\n
The command will return a list of all the Node.js versions that NVM can install, including stable releases, LTS versions, and even older, less common versions.
\\nIf you are using nvm-windows make sure you run all nvm-windows commands in an Admin shell.
\\nNVM makes installing Node.js versions a breeze. You have several options, depending on your needs.
\\nTo install the absolute latest version of Node.js, use the following command:
\\nnvm install node # Unix\\nnvm install latest # Windows ( nvm-windows )\\n\\n
At the time of writing, the latest version of Node.js is 23.7.0
.
For production environments, stability is key. Long Term Support (LTS) versions of Node.js are recommended for their extended support and reliability. To install the latest LTS version, use:
\\nnvm install lts/* # Unix\\nnvm install --lts # Unix \\nnvm install lts # Windows ( nvm-windows )\\n\\n
At the time of writing, the latest LTS version of Node.js is 22.14.0
.
If your project requires a very specific Node.js version, NVM allows you to install any available version by specifying the version number. For example, to install version 18.12.0
, you would use the following command:
nvm install 18.12.0 # Works on both nvm and nvm-windows\\n\\n
To see which versions you have installed, run the following command:
\\nnvm list # Works on both nvm and nvm-windows\\n\\n
The command will return the installed versions, such as:
\\nv18.12.0\\n v22.14.0\\n-> v23.7.0\\ndefault -> lts/* (-> v22.14.0)\\niojs -> N/A (default)\\nunstable -> N/A (default)\\nnode -> stable (-> v23.7.0) (default)\\nstable -> 23.7 (-> v23.7.0) (default)\\nlts/* -> lts/jod (-> v22.14.0)\\n...\\n\\n
The output above shows that the following versions of node were installed: 18.12.0, 22.14.0, 23.7.0
. It also shows that the current version of Node.js being used in this shell is v23.7.0
, and the default version is v22.14.0 (the latest LTS)
. The nvm and nvm-windows output should be somewhat similar.
To switch to a specific installed version, use the nvm use <version>
command, replacing <version>
with the desired version number. For instance, to switch to version 18.12.0
, you would run the following command:
nvm use 18.12.0 # Works on both nvm and nvm-windows\\n\\n
To switch to the LTS version, you would run the following command instead:
\\nnvm use lts/* # Unix\\nnvm use lts # Windows ( nvm-windows )\\n\\n
Beyond the basic installation and usage, NVM offers advanced features that streamline your Node.js version management even further. These features allow you to fine-tune your NVM setup and customize it to fit your development workflow perfectly.
\\nLet’s explore some of these more advanced capabilities:
\\nBeyond simply switching between versions, you can configure NVM to automatically use a specific Node.js version as the default for all new terminal sessions. This is particularly useful for streamlining your workflow and ensuring that your projects always use the correct Node.js version.
\\n\\nTo achieve this, use the nvm alias default <version>
command. Replace <version>
with the version number or alias (like lts/*
for the latest LTS) you want to use. For example:
nvm alias default 18.12.0\\n\\n
Or, for the latest LTS version:
\\nnvm alias default lts/*\\n\\n
After setting the default, new terminal windows or tabs will automatically use the specified Node.js version. Existing terminals might need to be restarted or sourced again for the change to take effect.
\\nPlease note that the nvm alias
command isn’t available in nvm-windows.
If you need to remove a Node.js version that you no longer use, NVM makes this easy.
\\nUse the nvm uninstall <version>
command, replacing <version>
with the version number you wish to remove. For example, to uninstall the Node.js version 23.7.0
using both NVM and nvm-windows, you would run:
nvm uninstall 23.7.0 # Works on both nvm and nvm-windows\\n\\n
Managing Node.js versions on a per-project basis is crucial for ensuring compatibility and avoiding conflicts. NVM simplifies this with the .nvmrc
file. By placing a .nvmrc
file in the root directory of your project, you can specify the required Node.js version for that project.
To use this feature, simply create a file named .nvmrc
in your project’s root directory. Inside this file, put the Node.js version you need. For example, if your project requires Node.js 18.12.0
, your .nvmrc
file would contain:
18.12.0\\n\\n
Or, if you prefer to use the latest LTS version:
\\nlts/*\\n\\n
When you navigate into a directory containing a .nvmrc
file, you can then run nvm use
. NVM will read the .nvmrc
file and automatically switch to the specified Node.js version. This makes collaboration easier, as anyone cloning the project can simply run nvm use
to use the correct Node.js version.
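A typical workflow then looks like this (the project path is illustrative):

cd ~/projects/my-app   # directory containing an .nvmrc with 18.12.0
nvm use                # reads .nvmrc and switches to v18.12.0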
Please note that this feature isn’t available in nvm-windows.
\\nIn situations where the default Node.js download server is slow, unavailable, or you have specific mirroring requirements, the NVM_NODE_MIRROR
environment variable comes in handy. This variable allows you to specify an alternative URL from which NVM will download Node.js releases.
To use this, set the NVM_NODEJS_ORG_MIRROR
environment variable before installing Node.js versions. For example, in your shell configuration file (e.g., .bashrc
, .zshrc
), you might add:
export NVM_NODEJS_ORG_MIRROR=https://your-mirror.example.com/node/\n\n
Replace https://your-mirror.example.com/node/
with the actual URL of your preferred mirror. After setting this variable, NVM will use the specified mirror when downloading Node.js versions.
If you are using nvm-windows run the following command instead:
\\nnvm node_mirror https://your-mirror.example.com/node/\\n\\n
Sometimes, you might need to run a single command in the context of a specific Node.js version without changing the default version for your current shell. The nvm exec
command is designed for this purpose.
The syntax is as follows:
\\nnvm exec <version> -- <command>\\n\\n
Replace <version>
with the Node.js version you want to use, and <command>
with the command you want to execute. For example, to install project dependencies using npm with Node.js version 23.7.0
, you could use:
nvm exec v23.7.0 -- npm install\\n\\n
This will run npm install
using Node.js 23.7.0
, without affecting the Node.js version used by your current shell for other commands. This is very useful for running specific scripts or tools that require a different Node.js version than your project’s primary one.
Please note that the nvm exec
command isn’t available in nvm-windows.
While NVM generally works smoothly, you might occasionally encounter some issues. This section covers common problems and provides solutions to help you get NVM working correctly.
\\nBefore installing NVM, it’s crucial to remove any existing Node.js and npm installations. These pre-existing installations can conflict with NVM, leading to unexpected behavior and errors.
\\nOn Unix systems, this might involve using your distribution’s package manager (e.g., apt
, yum
, pacman
, Homebrew
) to remove Node.js and npm.
On Windows, uninstall via the Control Panel.
\\nIf the nvm
command isn’t recognized after installation, it usually means that NVM’s directory isn’t in your system’s PATH. To fix that:
nvm
command again. If that does not work, run the following commands for the different shells on the command line: source ~/.bashrc
(bash), source ~/.zshrc
(zsh). If the issue persists on macOS, check this link
\\nsnap
, you will have to uninstall it and install it using apt
If you try to install or use a version of Node.js that NVM can’t find, it means that version isn’t available in the NVM repository.
\\nUse nvm ls-remote
to see the complete list of available versions. Double-check the version number for typos.
To ensure a smooth and efficient experience with NVM, it’s helpful to follow some best practices. These guidelines will help you avoid potential problems and make the most of NVM’s features:
\\nIt’s strongly recommended that NVM be installed on a per-user basis, rather than globally on a shared system. This helps to isolate Node.js environments for different users and avoid conflicts.
\\nAvoid using NVM in shared environments or on build servers where multiple users or processes might be using the same NVM installation. This can lead to issues with symbolic links and unpredictable behavior.
\\nConsider using containerization technologies like Docker to manage Node.js versions in shared or automated environments.
\\nKeeping NVM updated is important. New versions often include bug fixes, performance improvements, and support for the latest Node.js releases.
\\nYou can typically update NVM itself using a similar process to the initial installation. Consult the NVM GitHub page for update instructions.
\\nFor production environments, it’s generally best to use LTS versions of Node.js. LTS versions are supported for an extended period and receive critical bug fixes and security updates, ensuring stability for your applications.
\\nUse nvm install lts/*
to install the latest LTS.
NVM simplifies Node.js version management, allowing you to switch between different versions and avoid conflicts easily. By following this tutorial, you can effectively manage your Node.js environments and ensure your projects run smoothly.
\\nExplore the NVM git repository for more advanced features and options.
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nsort()
array method in JavaScript\\n sort()
method’s compare function\\n toSorted
array method in JavaScript\\n sort
and toSorted
methods\\n Data is most valuable when it’s well-organized, easy to read, and easy to understand. As a developer, you might need to sort products by price or arrange names alphabetically to quickly extract relevant information.
\\nThere are two primary array sorting methods in JavaScript: the Array.prototype.sort
method and the Array.prototype.toSorted
method. Both of these techniques have built-in sorting behavior, and are equally customizable. In this article, we’ll cover everything from the different use cases for each method, default sorting behavior, and how you can customize the sorting methods to meet your needs.
Then, we’ll learn how you can use the two sorting methods with the localeCompare
string method and the Intl.Collator
object for language-sensitive string array sorting.
sort()
array method in JavaScriptThe sort()
method is the primary sorting method in JavaScript. It takes an optional compare function as an argument. The JavaScript sort()
method sorts the array accordingly before returning a reference to the original array. The key thing to note here is that the sort()
method simply mutates the original array rather than creating a new array, as you can see here:
js\\narray.sort(compareFunction)\\n\\n
The compareFunction
determines our sorting behavior. If you don’t pass a compare function as an argument, this function will first convert the elements into strings and then use the numeric values of the UTF-16 code units to sort the elements in ascending order. Let’s look at the example below:
js\\nconst array = [20, 123, \\"🥚\\", \\"🐔\\"];\\nconsole.log(array.sort());\\n\\n
If you’re still new to JavaScript, the results of running the code above might surprise you. Let’s take a look at the result of the sorted array:
\\njs\\nconst sortedArray = [ 123, 20, \'🐔\', \'🥚\' ];\\n\\n
See how 123
has been sorted before 20
, even though 123
is a bigger number? This is because we invoked the sort()
method without a callback. With no callback, elements are first coerced to strings before being sorted in ascending order of the UTF-16 code units of their constituent characters.
The first element in the unsorted array is 20
. When it is coerced to a string, the first character in the resulting string is \\"2\\"
and the second character is \\"0\\"
. The Unicode code point assigned to the character \\"2\\"
is U+0032(50 in decimal), and for \\"0\\"
is U+0030(48 in decimal).
Because they both lie in the basic multilingual plane (BMP), both code points are encoded as single 16-bit code units in UTF-16 character encoding:
\\njs\\nconst array = [20, 123, \\"🥚\\", \\"🐔\\"];\\n\\nconst string = array[0].toString(); // 20\\nconsole.log(string.length); // 2\\n\\nconsole.log(string.charCodeAt(0)); // 50\\nconsole.log(string.charCodeAt(1)); // 48\\n\\n
Similarly, the second element in our unsorted array is 123
. When we coerce the integer 123
to a string, the first character in the resulting string will be \\"1\\"
, the second character will be \\"2\\"
, and the third character will be \\"3\\"
.
The Unicode code point assigned to the character \\"1\\"
is U+0031(49 in decimal), for the character \\"2\\"
is U+0032(50 in decimal), and for \\"3\\"
is U+0033(51 in decimal).
All three code points lie in the BMP like before. Therefore, each one of them is encoded as a single 16-bit code unit in the UTF-16 character encoding:
\\njs\\nconst array = [20, 123, \\"🥚\\", \\"🐔\\"];\\n\\nconst string = array[1].toString();\\nconsole.log(string.length); // 3\\n\\nconsole.log(string.charCodeAt(0)); // 49\\nconsole.log(string.charCodeAt(1)); // 50\\nconsole.log(string.charCodeAt(2)); // 51\\n\\n
Similar to integers, all other non-undefined JavaScript primitives are first coerced to strings when you invoke the sort()
method without a callback. All undefined
values and empty slots are sorted to the end of the array.
The last two entries in our unsorted example array are both emojis. Since they are already strings, they are not coerced to strings like the other JavaScript primitives.
\\nThe Unicode code point assigned to the egg emoji, \\"🥚\\"
, is U+1F95A(129370 in decimal) and that for the chicken emoji, \\"🐔\\"
, is U+1F414(128020 in decimal). Both code points lie in the supplementary plane outside the BMP.
In UTF-16 character encoding, Unicode code points in the supplementary plane cannot be encoded using single 16-bit code units. As a result, they’re encoded using double 16-bit code units referred to as surrogate pairs. This explains why the length of the strings for both \\"🥚\\"
and \\"🐔\\"
are 2:
js\\nconsole.log(\\"🥚\\".length === 2) // true\\nconsole.log(\\"🐔\\".length === 2) // true\\n\\n
The first UTF-16 code unit in a surrogate pair is referred to as the leading or higher surrogate, and the second is referred to as the trailing or lower surrogate.
\\n\\nThe code point assigned to the egg emoji is U+1F95A(129370 in decimal). In UTF-16 character encoding, this will be encoded into its respective leading and trailing surrogate pairs U+D83E(55358 in decimal) and U+DD5A(56666 in decimal):
\\njs\\nconst array = [20, 123, \\"🥚\\", \\"🐔\\"];\\n\\nconst eggEmoji = array[2];\\n\\nconsole.log(eggEmoji.charCodeAt(0)); // 55358\\nconsole.log(eggEmoji.charCodeAt(1)); // 56666\\n\\n
Similarly, the Unicode code point assigned to the chicken emoji is U+1F414(128020 in decimal). In UTF-16 character encoding, this will be encoded into its respective leading and trailing surrogate pairs U+D83D (55357 in decimal) and U+DC14(56340 in decimal):
\\njs\\nconst array = [20, 123, \\"🥚\\", \\"🐔\\"];\\n\\nconst chickenEmoji = array[3];\\n\\nconsole.log(chickenEmoji.charCodeAt(0)); // 55357\\nconsole.log(chickenEmoji.charCodeAt(1)); // 56340\\n\\n
After coercing non-string values to strings as described above, the UTF-16 code units of the characters in the respective strings are used for sorting the elements of the array in ascending order.
\nIn our example array above, the first element is 20. When coerced to a string, the UTF-16 code unit of its first character, \"2\", is 50.
The second element is 123. When coerced to a string, the UTF-16 code unit of its first character, \"1\", is 49.
The third and fourth elements are the egg and chicken emojis, respectively. The leading surrogate for the egg emoji is 55358 in decimal, and 55357 for the chicken emoji.
\\nFinally, JavaScript will use these code units to sort the elements of the array in ascending order. Therefore, JavaScript doesn’t use the actual characters but rather the numerical values of their UTF-16 code units. The lowest code unit in our example above is 49, followed by 50, 55357, and 55358.
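Putting this together, sorting the example array without a callback produces:

```js
const array = [20, 123, "🥚", "🐔"];
console.log(array.sort()); // [ 123, 20, '🐔', '🥚' ]
```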
\\nThat explains why 123
comes before 20
in our sorted array, despite 123
being greater. Then the chicken and egg emojis come into the third and fourth positions, respectively.
If your array is sparse or has undefined
elements, the empty slots and undefined
elements are sorted to the end of the array. The empty slots are sorted after all the undefined
elements.
Let’s now sort our example array after turning it into a sparse array by adding an empty slot and an undefined
value, as seen in the example below:
js\\nconst array = [20, undefined, , 123, \\"🥚\\", \\"🐔\\"];\\nconsole.log(array.sort()); // [ 123, 20, \'🐔\', \'🥚\', undefined, <1 empty item> ]\\n\\n
The empty slot and the undefined
element are now sorted to the end of the array when you execute the code above.
We just explored JavaScript’s sort()
method when invoked without a callback. However, this default behavior isn't your only option; you can customize it by passing a callback function.
The callback function determines whether the array is sorted in ascending or descending order. In the next section, we'll look at how to use the callback function.
\\nTo recap the previous section, the JavaScript sort()
array method takes an optional compare function as a callback. If you don't pass a callback, the array's elements are coerced to strings and then sorted in ascending order of the numerical values of their UTF-16 code units.
However, the default JavaScript sorting behavior is rarely what you need. More often than not, you’d want to custom-sort elements in ascending or descending order.
\\nIf you want custom sorting behavior, you must pass a compare function as a callback. The compare function takes two values as arguments. It will only be invoked with non-undefined values. All undefined
values and empty slots will be sorted to the end of the array.
The compare function should return a negative number if the first argument is to be sorted before the second argument. It should return a positive number if the second argument is to be sorted before the first, and a zero if they are equal and their original order is to be maintained.
\\nThe example code below gives us an idea of what the compare function should look like:
\\njs\\nconst compareFunction = (arg1, arg2) => {\\n if (\\"arg1 is to be sorted before arg2\\") {\\n return \\"a negative number\\";\\n } else if (\\"arg1 is to be sorted after arg2\\") {\\n return \\"a positive number\\";\\n }\\n return \\"zero to maintain the original order of arg1 and arg2\\";\\n}\\n\\n
To sort an array of numbers in ascending order, your compare function should look something like the code below:
\\njs\\nconst array = [23, -45, 78];\\narray.sort((a, b) => {\\n if(a < b) return -1;\\n if(a > b) return 1;\\n return 0;\\n})\\n\\n
The returned values from the compare function don’t necessarily have to be -1
and 1
. You can return any negative or positive value. You can make the above sorting function more succinct, like so:
js\\nconst array = [23, -45, 78];\\narray.sort((a, b) => a - b);\\n\\n
We can apply a similar logic to sort an array of numbers in descending order by switching the comparison operator that we see in the if
conditions above. Your compare function should then look like this:
js\\nconst array = [23, -45, 78];\\narray.sort((a, b) => {\\n if(a > b) return -1;\\n if(a < b) return 1;\\n return 0;\\n})\\n\\n
You can also write the above compare function like this:
\\njs\\nconst array = [23, -45, 78];\\narray.sort((a, b) => b - a);\\n\\n
The sort()
method doesn’t just sort number arrays. You can use it to sort an array of primitive values, objects, or a mix of both primitive values and objects.
As an example, let's take an array of objects sorted in alphabetical order by the name property:
js\\nconst students = [\\n { name: \\"Aggy Doe\\", age: 19 },\\n { name: \\"Jane Doe\\", age: 16 },\\n { name: \\"Kent Doe\\", age: 14 },\\n { name: \\"Mark Doe\\", age: 19 },\\n];\\n\\n
We can use the sort()
method to sort the above objects in ascending order of the age
property, like so:
js\\nstudents.sort((a, b) => a.age - b.age);\\n\\n
Our sorted students
array is all set. You'll notice that two students are the same age. If you're using ECMAScript version 10 (2019) or above, JavaScript guarantees sort stability. Earlier versions of ECMAScript do not guarantee sort stability.
In a stable sort, the original order is maintained if there is a tie. We can see that in our students
array, where the original order is maintained even for our students of the same age:
js\\nconst students = [\\n { name: \\"Kent Doe\\", age: 14 },\\n { name: \\"Jane Doe\\", age: 16 },\\n { name: \\"Aggy Doe\\", age: 19 },\\n { name: \\"Mark Doe\\", age: 19 },\\n];\\n\\n
As explained above, sort
does not create a new array; it mutates the original array. If you don’t want to mutate the original array, first create a new array before sorting or use the toSorted
array method instead. In the next section, we’ll take a look at the toSorted
method.
In the example below, we’re using structuredClone
to create a copy of the original array before sorting. You can also use other built-in features like spread syntax to clone the original array:
js\\nconst clone = structuredClone(students);\\nclone.sort((a, b) => a.age - b.age);\\n\\n
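Spread syntax achieves the same result with a shallow copy:

```js
// Sort a shallow copy so the original students array stays intact
const copy = [...students];
copy.sort((a, b) => a.age - b.age);

console.log(copy === students); // false
```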
## The sort() method's compare function

The compare callback function in the sort()
and toSorted
methods must conform to the properties below to carry out custom sorting behaviors. If your compare function doesn’t conform to these properties, then it is not well defined, meaning its behavior will vary unpredictably across JavaScript engines.
The callback you pass to the sort()
function should be a pure function. If you invoke it with the same pair of arguments, it should always return the same output.
The function’s output should be determined exclusively by its input, without relying on external state or variables. The function must also avoid side effects, such as modifying external state or performing I/O operations.
\\nThe compare function must be pure because there is no guarantee of how or when it’ll be invoked.
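As an illustration, a compare function like the one below is not pure, because its output ignores its inputs; the resulting order is unpredictable and can differ between engines and even between runs:

```js
// Not well defined: the return value doesn't depend on a or b
const shuffled = [3, 1, 2].sort(() => Math.random() - 0.5);
console.log(shuffled); // an arbitrary order
```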
\\nYour compare function should be reflexive. If both arguments are the same, the compare function should return 0
so that the original order of the entries is maintained in the sorted array. Check out an example below:
js\\nconsole.log(compareFunction(a, a) === 0); // true\\n\\n
The compare function should be anti-symmetric. This means the return values of invoking compareFunction(a, b)
and compareFunction(b, a)
should be of opposite signs or zero.
Finally, the compare function callback should be transitive. If compareFunction(a, b)
and compareFunction(b, c)
are both positive, negative, or zero, then compareFunction(a, c)
has the same sign as the other two.
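For example, the numeric compare function (a, b) => a - b satisfies all three properties:

```js
const cmp = (a, b) => a - b;

console.log(cmp(5, 5)); // 0 (reflexive)
console.log(cmp(1, 3), cmp(3, 1)); // -2 2 (anti-symmetric: opposite signs)
console.log(cmp(1, 2), cmp(2, 3), cmp(1, 3)); // -1 -1 -2 (transitive: all negative)
```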
## The toSorted array method in JavaScript

We've done a deep dive into the sort()
method. However, one of the sort()
method’s limitations is that it mutates the original array, forcing you to make a copy of the original array beforehand if you want to keep the array as is.
As a workaround for that hassle, you can also use the handy built-in toSorted
array method. Unlike sort
, the toSorted
JavaScript array method doesn’t mutate the original array. Instead, it returns a new array.
The toSorted
method is similar to sort
. If you don’t invoke it with a compare callback, the elements are coerced to strings, and the UTF-16 code units of the resulting strings are used to sort the elements in ascending order:
js\\nconst array = [20, 123, \\"🥚\\", \\"🐔\\"];\\nconst sortedArray = array.toSorted();\\n\\nconsole.log(array === sortedArray); // false\\nconsole.log(sortedArray); // [ 123, 20, \'🐔\', \'🥚\' ]\\n\\n
To implement custom sorting in the toSorted
method, pass a compare function, just as with the sort()
method. This function takes two arguments and should return a negative value if the first argument comes before the second, a positive value if it comes after, and zero if their order remains unchanged.
In the example below, we’re sorting the array of numbers in ascending order:
\\njs\\nconst array = [20, 123, -67];\\nconst sortedArray = array.toSorted((a, b) => a - b);\\n\\nconsole.log(array === sortedArray); // false\\nconsole.log(sortedArray); // [ -67, 20, 123 ]\\n\\n
As with sort()
, the toSorted
method sorts undefined
elements and empty slots to the end of the array. The compare function is not invoked for the undefined
elements and the empty slots.
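Note one difference: because toSorted never returns a sparse array, empty slots come back as undefined values rather than as empty slots:

```js
const sparse = [20, undefined, , 123];
console.log(sparse.toSorted()); // [ 123, 20, undefined, undefined ]
```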
Be aware that toSorted
is a fairly new method, standardized in ES2023. Therefore, it might not be supported in older web browsers and JavaScript runtime environments.
JavaScript has built-in locale-aware functions that you can use to sort string arrays. These functions are the Intl.Collator
object and the String.prototype.localeCompare
method.
The primary function of the Intl.Collator
object is for language-sensitive string comparison. Different languages have their own rules for string comparison. You can use Intl.Collator
for locale-sensitive sorting of string arrays like in the example below:
js\\nconst swedishCollator = new Intl.Collator(\\"sv\\");\\nconst germanCollator = new Intl.Collator(\\"de\\");\\n\\nconst data = [\\"Z\\", \\"a\\", \\"z\\", \\"ä\\"];\\nconsole.log(data.toSorted(swedishCollator.compare)); // [ \'a\', \'z\', \'Z\', \'ä\' ]\\nconsole.log(data.toSorted(germanCollator.compare)); // [ \'a\', \'ä\', \'z\', \'Z\' ]\\n\\n
The Intl.Collator.prototype.compare
method takes a pair of strings you want to compare as arguments and returns a negative number if the first argument comes before the second, a positive number if the first argument comes after the second, and a zero if they're equal. That's why you can pass it as an argument to either the [Array.prototype.sort()](https://blog.logrocket.com/guide-four-new-array-prototype-methods-javascript/) method or the Array.prototype.toSorted method.
You can also customize sort behavior by passing an optional second argument to Intl.Collator
. Check the documentation to learn more.
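For instance, enabling the numeric option makes the collator compare strings of digits by their numeric value instead of code unit by code unit:

```js
const numericCollator = new Intl.Collator("en", { numeric: true });

console.log(["10", "2", "1"].toSorted()); // [ '1', '10', '2' ]
console.log(["10", "2", "1"].toSorted(numericCollator.compare)); // [ '1', '2', '10' ]
```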
Similarly, you can also use the String.prototype.localeCompare
method to sort string arrays. It returns a number indicating whether this
string comes before, after, or is the same as the provided string:
js\nconst referenceString = \"Jane Doe\";\nconsole.log(referenceString.localeCompare(\"Chris Doe\")); // a positive number: \"Jane Doe\" comes after \"Chris Doe\"\n\n
If it returns a negative number, this
string comes before the comparison string. If it returns a positive number, this
comes after the comparison string, and a zero means this
is equivalent to the comparison string.
Therefore, you can also use the localeCompare
method with sort
or toSorted
methods to sort an array of strings in alphabetical order:
js\\nconst names = [\\n \\"Jane Doe\\",\\n \\"Kim Doe\\",\\n \\"Chris Doe\\",\\n \\"Mia Doe\\",\\n]\\n\\nnames.sort((a, b) => a.localeCompare(b))\\nconsole.log(names)\\n\\n
For browsers or runtime environments that implement the Intl.Collator
API, the localeCompare
method internally uses Intl.Collator
.
## Sorting array-like objects with the sort and toSorted methods

In the previous sections, we used sort
and toSorted
as array instance methods. However, as with most array methods, both are generic functions. Their use is not limited to arrays.
You can use them to sort array-like objects. They only require the object to have a length property and integer-keyed properties, like arrays, as in the example below:
\\njs\\nconst object = {\\n 0: 90,\\n 1: 34,\\n 2: -45,\\n 3: 12,\\n length: 4,\\n};\\n\\nArray.prototype.sort.call(object, (a, b) => a - b);\\nconsole.log(object); // { \'0\': -45, \'1\': 12, \'2\': 34, \'3\': 90, length: 4 }\\n\\n
The example above sorts the given object in ascending order. You can also sort it in descending order by modifying the compare callback. The sort()
method mutates the original object and returns a reference to it.
If you don’t want to mutate the original object, you can use the toSorted
method similarly. Unlike sort
, the toSorted
method returns a new array, not an object:
js\\nconst object = {\\n 0: 90,\\n 1: 34,\\n 2: -45,\\n 3: 12,\\n length: 4,\\n};\\n\\nconst sortedObject = Array.prototype.toSorted.call(object, (a, b) => a - b);\\n\\nconsole.log(sortedObject); // [ -45, 12, 34, 90 ]\\nconsole.log(sortedObject === object); // false\\n\\n
The sort
and toSorted
methods are built-in tools for sorting JavaScript arrays. By default, they convert elements to strings and sort them by UTF-16 code units, but you can customize their behavior using a compare function. The key difference is that sort
mutates the original array, while toSorted
returns a new one.
We've also explored custom sorting behavior, which you can activate by passing a compare callback function to modify the default behavior. You can use the callback function to sort arrays of JavaScript primitives and objects by whatever criterion you need.
\\nBoth methods are generic and work on array-like objects with a length property and integer-keyed properties. Choosing between them depends on whether you want to modify the original array or keep it unchanged. Use the sort()
method if you want to mutate the original array and toSorted
if you don’t want to mutate the original array.
Over 2 million merchants use Shopify to build and operate their online stores, from small mom-and-pops to global brands. While these merchants have had access to myriad tools — including LogRocket — on their product pages, that visibility was nowhere to be seen for the most vital portion: checkout.
\\nThis leaves the checkout process itself as a black box. Why do users not complete a purchase after adding items to their cart? Was it blocked by a technical error? Did the customer try to apply an incorrect promo code? Did they use an expired credit card?
\\nToday we’re excited to announce that Shopify site owners now have access to answer all those questions and more with LogRocket for Shopify Checkout. LogRocket already supports major retailers such as Costco, Saks Off 5th, and Valentino. Now, Shopify merchants will get full session replay, to see what their users are experiencing, along with AI-funnel insights and analytics — proactively surfacing opportunities for the most impactful ways to improve eCommerce revenue.
\\nToday, with the launch of LogRocket for Shopify Checkout, Shopify merchants can finally pull back the curtain on their checkout experiences. LogRocket’s AI-first session replay and analytics solution captures complete session replays and events, providing an end-to-end picture of the customer journey:
\\nFor the first time, Shopify merchants can visualize the exact friction points and issues that cause customers to drop out of the checkout process, and make data-driven decisions to reduce abandonment rates and increase conversion.
\\nTo improve conversion, LogRocket’s Galileo AI watches every session replay for you, highlighting the most important issues and opportunities to improve conversion. These opportunities and behavior patterns are summarized in natural language, prioritized by impact, and sent to you — so that merchants can spend their time building great businesses, rather than watching hour after hour of session replays to diagnose where users run into trouble:
LogRocket for Shopify Checkout is available now in the Shopify App Store. Shopify merchants with an existing LogRocket account can add session replay to their checkout flow for free. If you’re new to LogRocket, sign up for free today.
\\n\\n\\n\\nRedis is one of the most popular distributed in-memory data store systems. Over the last few years, many developers have been using it not only as a NoSQL database but also as a performant cache and message queue. Thanks to its design, Redis offers low-latency reads and writes, which makes it a very widely used technology in modern programming.
\\nThe technology is so popular that many cloud providers, including Amazon AWS, have been using it and offering it to their customers.
\nHowever, in March 2024, Redis announced a shift in its license model [1]:
\\n\\nBeginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD).
As a consequence, some corporate contributors, including Amazon AWS and Google Cloud, announced an open source fork, Valkey, based on the last open source version of Redis (7.2.4).
\\nAt the time of writing, Redis and Valkey have several overlaps in their features. Nonetheless, the two technologies are developed and backed by different teams. Valkey 8.0, released in September 2024, comes with several key differences from Redis. In the future, we can expect more divergence.
\\nIn this article, we’ll compare Valkey and Redis, highlighting their differences with special attention to performance, pricing, support, and observability.
\\nAs we saw above, the differences between Valkey and Redis have been growing with each release.
\nBecause Valkey is a fork of Redis, the performance of the two is similar. With its 8.0.0 release, however, Valkey has pushed its limits even further. In particular, an optimization of the way the SUNION
and SDIFF
commands handle temporary set objects resulted in a 41% and 27% performance boost, respectively [2].
Additionally, Valkey now supports multi-threading for input/output and command execution. Redis, on the other hand, is still single-threaded for most operations.
\\nValkey also recently implemented experimental support for RDMA (remote direct memory access), which Redis still lacks. RDMA enables nodes in a network to exchange data in the main memory without involving the CPU, cache, or operating system of any node. This means RDMA has better performance than TCP.
\nSo far, tests of Valkey over RDMA have shown a ~2.5x boost in queries per second (QPS) and lower latency.
\\n\\nLastly, Valkey 8.1 will introduce a new implementation of the dictionary, which is more memory and cache-efficient. This is done via a new memory-efficient implementation of the hash table used to store Valkey keys.
\\nValkey and Redis support the same persistence strategies:
\\nChoosing a persistence strategy can be difficult, as each comes with tradeoffs, and each strategy can be configured as needed. In Valkey, we have to choose – and configure – one.
\\nRedis, on the other hand, offers a paid alternative, named Redis Enterprise, that offers six built-in persistence options:
\\nTherefore, Redis Enterprise offers simplified strategies if we don’t have specific requirements. Other than that, Redis and Valkey have no relevant differences in this area.
\\nBoth Redis and Valkey come with several metrics we can use to evaluate and tweak their behavior. For example, both allow us to monitor the latency in the system to track and troubleshoot potential latency issues.
\\nMore generally, both systems offer an INFO
command that returns information on the system. It includes, among other data, the latency (as we saw above), the server and replication configuration, memory and CPU usage, and error statistics. This is the way cloud providers (such as Amazon AWS) collect metrics for the Redis and Valkey systems they manage.
Valkey, however, recently introduced per-slot metrics. Cluster hash slots are the way Redis Cluster and Valkey Cluster manage how data is partitioned within the cluster. For example, there are 16,384 slots in a Redis Cluster and each key is uniquely mapped to only one of them. In particular, version 8.0 introduced the CLUSTER SLOT-STATS
command, returning usage statistics for the slots assigned to the current cluster shard. At the time of writing, this includes the following metrics:
Furthermore, there are plans to add more memory-related information in Valkey 8.2.
\\nRedis, on the other hand, does not provide any statistics on the slots of a cluster. This makes Valkey a preferable choice if we need fine-grained observability of our systems.
\\n\\nValkey was proposed and is actively backed by many cloud providers, including Amazon AWS and Google Cloud. Furthermore, developers from Oracle, Ericsson, and Snap Inc. are known to be contributing to Valkey to enhance its performance, scalability, and integration among different environments.
\\nRedis is mainly maintained by Redis Inc., which drives its development and commercial support.
\\nGoing forward, we can expect more features and smoother cloud integration for Valkey rather than Redis.
\\nPricing largely depends on the cloud provider or on the enterprise solution we’re buying. Generally speaking, Valkey is cheaper than Redis on Amazon AWS and Google Cloud.
\\nFor example, according to the AWS Pricing Calculator, a 3-node cache.r6g.8xlarge cluster (a fairly large one) costs 20% less with Valkey than with Redis — $6,419.33/month vs. $8,024.16/month, respectively, in the Ireland region.
\\nOther AWS nodes or Google Cloud pricing have similar differences. For organizations running large clusters, this means massive savings each month.
\\nBoth Redis and Valkey offer the same basic set of features. For example, we can use either of them to implement a message queue (using the LPUSH
and LPOP
commands), a cache, and/or a NoSQL database (using the SET
and GET
commands, possibly with an expiration time).
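As a rough sketch of both use cases with the node-redis client (the same commands should work against a Valkey server, since Valkey keeps protocol compatibility with Redis 7.2; the connection URL is an assumption):

```js
import { createClient } from "redis";

const client = createClient({ url: "redis://localhost:6379" }); // assumes a local server
await client.connect();

// Cache / NoSQL usage: SET with an expiration time, then GET
await client.set("session:42", "alice", { EX: 60 }); // expires after 60 seconds
console.log(await client.get("session:42")); // "alice"

// Simple queue: LPUSH to enqueue, LPOP to dequeue (use RPOP instead for FIFO order)
await client.lPush("jobs", JSON.stringify({ id: 1 }));
const job = await client.lPop("jobs");

await client.quit();
```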
At the time of writing, Valkey hasn't introduced relevant improvements in the feature set it offers. The last releases have mainly focused on the internal implementation to improve performance (such as the enhanced memory efficiency and asynchronous I/O handling we saw above).
\\nIf you have more complex use cases, or if the data structures you’re working with are complex, make sure to test your applications with Valkey before committing to a migration. You can do that either at an infrastructural level (which is more expensive; see below), or by using Docker to spin up a disposable Valkey container on your local computer.
\\nAs we saw above, Valkey forked from Redis 7.2.4. Therefore, the first step of the migration should be to update our infrastructure/Redis clients to use Redis 7.2.4. This way we can test our applications with the latest Redis version before the fork.
\\nAfter that, we can deploy a Valkey instance, which will act as a replacement for the existing Redis cluster. If our infrastructure hosts customer-facing applications, it is paramount that Valkey and Redis can coexist. This way, we can test our workloads without affecting the customers. Furthermore, we can export Redis data using the redis-cli save
command, which creates a .rdb
file. We can then import it into Valkey, possibly with a few adjustments in the data structures and configuration.
The actual validation of the Valkey instance largely depends on what we use Redis for. Generally speaking, we should verify that Valkey supports all the workflows we are using Redis for (e.g., the type of all the keys we use is supported).
\\nThe last step of the migration is deleting the old Redis cluster.
\\nThe migration process might be more challenging depending on your requirements. Some Redis features might not be available in Valkey (yet), and developers might need time to adapt to the new tool. Lastly, fine-tuning Valkey settings might take a while. Until then, the performance might not be optimal.
\\nIn this article, we analyzed both Valkey and Redis from different points of view. Since Valkey was announced (and forked from Redis), the hype in the community has been growing, and so has its usage among many companies.
\\nThe question “Should I migrate?” is a difficult one to answer. Based on the comparison above and considering Valkey’s backers, the immediate answer would probably be “Yes!” But be careful, because all that glitters is not gold. Valkey is still fairly new, and we don’t know much about its future.
\\nTherefore, before committing to one side or the other, consider the following aspects:
\\nAsk yourself all those questions, deliberate on the answers, and then decide whether or not to migrate. In any case, both solutions are robust and will help you handle large amounts of data efficiently in your applications.
GraphQL and REST are the two most popular architectures for API development and integration, facilitating data transmissions between clients and servers. In a REST architecture, the client makes HTTP requests to different endpoints, and the data is sent as an HTTP response, while in GraphQL, the client requests data with queries to a single endpoint.
\\nIn this article, we’ll evaluate both REST and GraphQL so you can decide which approach best fits your project’s needs.
\\nEditor’s note: This article was last updated by Temitope Oyedele in March 2025 to include decision-making criteria for when to use GraphQL vs. REST, as well as to update relevant code snippets.
\\nREST (Representational State Transfer) is a set of rules that has been the common standard for building web API since the early 2000s. An API that follows the REST principles is called a RESTful API.
\\nA RESTful API helps structure resources into a set of unique uniform resource identifiers (URIs), which serve as addresses for different types of resources on a server. The URIs are used in combination with HTTP verbs, which tell the server what we want to do with the resource.
\\nThese verbs are the HTTP methods used to perform CRUD (Create, Read, Update, and Delete) operations:
POST
: Means to createGET
: Means to readPUT
: Means to updateDELETE
: Means to deleteSome requests, like POST and PUT, sometimes include a JSON or form-data payload that contains server-side information. The server processes the request and responds with an HTTP status code that indicates the outcome, which, most of the time, can include a response body containing data or details.
\\nThe HTTP status codes are as follows:
\\n200-level
: A request was successful400-level
: Something was wrong with the request500-level
: Something is wrong at the server levelGraphQL is a query language developed by Meta. It provides a schema of the data in the API and gives clients the power to ask for exactly what they need.
\\nGraphQL sits between the clients and the backend services. One cool thing about GraphQL is that it can aggregate multiple resource requests into a single query. It also supports mutations, which are GraphQL’s way of applying data modifications, and subscriptions, which are GraphQL’s way of notifying clients about data modifications during real-time communications:
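For instance, a mutation and a subscription for a hypothetical books API might look like this (the addBook and bookAdded fields are illustrative, not part of any standard schema):

```
mutation {
  addBook(title: "Understanding GraphQL APIs") {
    id
    title
  }
}

subscription {
  bookAdded {
    id
    title
  }
}
```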
\\nREST centers around resources, each identified by a unique URL. For example, to fetch a single book resource, you might do:
\\nGET /api/books/123\\n\\n
The response might look like this:
\\n{\\n \\"title\\": \\"Understanding REST APIs\\",\\n \\"authors\\": [\\n {\\n \\"name\\": \\"John Doe\\"\\n },\\n {\\n \\"name\\": \\"Anonymous\\"\\n }\\n ]\\n}\\n\\n
Some APIs can split related data into separate endpoints. For example, a different request might fetch the authors instead of including them in the main book response. The exact design depends on how the API is structured.
\\nGraphQL, on the other hand, uses a single endpoint (e.g., /graphql) and lets clients query exactly the data they need in one request. You start by defining types, and then the client sends a query describing which fields to fetch. For example, after defining your Book
and Author
types, a query for the same book data could look like this:
query {\\n book(id: \\"123\\") {\\n title\\n authors {\\n name\\n }\\n }\\n}\\n\\n
The response contains only the requested fields:
\\n{\\n \\"data\\": {\\n \\"book\\": {\\n \\"title\\": \\"Understanding GraphQL APIs\\",\\n \\"authors\\": [\\n {\\n \\"name\\": \\"John Doe\\"\\n },\\n {\\n \\"name\\": \\"Anonymous\\"\\n }\\n ]\\n }\\n }\\n}\\n\\n
This approach reduces over-fetching and under-fetching since the client decides exactly which fields to request.
\\nREST uses HTTP status codes for error handling and relies on standard HTTP methods (GET, POST, PUT, DELETE). It provides a variety of API authentication and encryption mechanisms, such as TLS, JWTs, OAuth 2.0, and API keys.
\\nGraphQL uses a single endpoint and requires safeguards like query depth limiting, introspection control, and authentication to prevent abuse. While simpler in some ways, it can introduce complexity. As a developer, you’ll need to come up with some authentication and authorization methods to prevent performance issues and denial-of-service (DoS) caused by introspection.
\\nREST has a rigid structure, as it can return unwanted data when over-fetched and insufficient data when under-fetched. This means you might need to make multiple calls, which increases the time required to retrieve the necessary information.
\\n\\nGraphQL, on the other hand, allows clients a lot of flexibility by giving the client exactly what is requested with a single API call. The client specifies the structure of the requested information and the server returns just that. This eliminates over-fetching and under-fetching issues and makes data fetching more efficient.
\\nRESTful APIs adopt versioning to manage modifications on data structures and deprecations in order to avoid system failures and service disruptions for end users. This means you need to build versions for every change or update that you make, and if the number of versions grows, maintenance can become difficult.
\\nGraphQL, on the other hand, reduces the need for versioning as it has a single versioned endpoint. It allows you to define your data requirements in the query. GraphQL manages updates and deprecations by updating and extending the schema without explicitly versioning the API.
\\nREST is widely used across several industries. For example, platforms like Spotify and Netflix use RESTful APIs to access media from remote servers. Companies like Stripe and PayPal use REST to securely process transactions and manage payments. Other companies that use REST include Amazon, Google, and Twilio.
\\nGraphQL’s popularity has grown in recent years and is now being used by companies and organizations. For example, GraphQL is using Meta, its creator, to solve the inefficiencies of RESTful APIs. Samsung also uses it for its customer engagement platform. Other companies that use GraphQL include Netflix, Shopify, Twitter, etc.
\\nBecause REST APIs are poorly typed, you need to implement error handling. This means using HTTP status codes to indicate the status or success of a request. For example, if a resource is not found, the server returns 404, and if there’s a server error, it returns a 500 Error.
\\nGraphQL, on the other hand, always returns a 200 ok
status for all requests regardless of whether they resulted in an error. The system communicates errors in the response body alongside the data, which requires you to parse the data payload to determine whether the request was successful.
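A minimal sketch of that behavior from the client side (the /graphql endpoint and error message are illustrative):

```js
const res = await fetch("/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: '{ book(id: "999") { title } }' }),
});
console.log(res.status); // 200, even though the query failed

const { data, errors } = await res.json();
if (errors) {
  console.error(errors[0].message); // e.g., "Book not found"
}
```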
REST doesn’t inherently provide type definitions, making it prone to runtime errors in client-side applications.
\\nGraphQL ships with built-in type safety in its schema. Each field in the schema is typed, ensuring that clients know the exact structure and type of the data they will receive. This reduces runtime errors in client-side applications.
\\nAPI technologies like REST would require multiple HTTP calls to access data from multiple sources.
\\nOn the other hand, GraphQL simplifies aggregating data from multiple sources or APIs and then resolving the data to the client in a single API call.
\\nBelow is a detailed comparison table summarizing their main differences:
\\nFeature | \\nREST | \\nGraphQL | \\n
---|---|---|
Data fetching | \\nMay over-fetch or under-fetch data due to fixed endpoints | \\nFetches only the requested fields, reducing data transfer overhead | \\n
API schema | \\nNo strict schema enforcement by default | \\nUses Schema Definition Language (SDL) to enforce a strongly typed schema | \\n
Number of endpoints | \\nMultiple endpoints for different resources | \\nSingle endpoint handling all queries and mutations | \\n
Caching | \\nBuilt-in support with HTTP caching (CDN, browser, and proxy caching) | \\nMore complex; requires custom caching strategies | \\n
Error handling | \\nUses HTTP status codes (e.g., 404, 500) for clear error responses | \\nReturns 200 OK even for errors; requires parsing the error object | \\n
Real-time updates | \\nRequires WebSockets, polling, or SSE for real-time communication | \\nSupports real-time subscriptions natively | \\n
Complex queries | \\nClients must make multiple requests to retrieve related data | \\nClients can request multiple related entities in a single query | \\n
Security | \\nEasier to enforce role-based access and rate-limiting | \\nRequires additional security measures, such as query complexity limits | \\n
Industry adoption | \\nStill the dominant API standard in enterprise, finance, and healthcare | \\nGaining popularity in startups, ecommerce, and social media apps | \\n
Each has its advantages and disadvantages, so the choice ultimately depends on your project’s needs. Do you want your project to be built based on performance, security, or flexibility? Once you’ve answered that, you can choose the one that best suits your project.
\\nREST provides you with a scalable API architecture that powers millions of applications worldwide. It excels in simplicity, caching, and security, which makes it the go-to choice for public APIs, financial services, and enterprise applications.
\\nChoose REST when:
\\nGraphQL on the other hand, would give you full control over data fetching. It’s perfect for flexible, frontend-driven applications that require real-time updates and efficient API queries.
\\nChoose GraphQL when:
\\nBoth GraphQL and REST offer distinct advantages. REST is used for most applications due to its simplicity and dependability, but GraphQL is best suited for modern, frontend-driven apps that require flexibility and efficiency. Knowing all of this will help you choose the right architecture for your project.
This guide will show you how to hide the scrollbar in popular web browsers by making use of modern CSS techniques.
\\nThe browser’s scrollbar allows users to scroll up and down on the page without taking their hands off the keyboard or trackpad. However, to achieve a more streamlined appearance, certain websites alter, customize, or completely hide the scrollbar, either for the entire webpage or specific elements.
\\nTo get the most out of this article, you should have a basic understanding of HTML and CSS.
\\nWhile it’s generally recommended to avoid altering or overriding default browser styles for accessibility reasons, there can be compelling justifications for hiding scrollbars.
\\nScrollbars appear automatically when web content exceeds the available space within the browser window. User agent styles, which are responsible for the default browser styling, manage this behavior.
\\nA scrollbar provides a visual cue for scrolling using a mouse or keyboard. However, it’s unnecessary in specific layout patterns: particularly those that don’t require scrolling interactions, such as a slideshow, news tickers, image galleries, etc. In such patterns, hiding the scrollbar can create a smoother interaction and eliminate distractions from the overall feature.
\\nBy hiding scrollbars, you can reclaim the space they occupy, adding to your screen’s real estate. This not only streamlines the UI but also allows for a cleaner and more spacious design.
\\nAnother common motivation for hiding scrollbars is to enhance the mobile viewing experience. On mobile devices, especially smartphones, users typically expect vertical scrolling, with no need for horizontal movement, as screens are usually tall and narrow, and content flows from top to bottom. Keeping the horizontal scrollbar hidden creates a more natural feel and reduces attention to the technical aspects of browsing.
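A common way to do this, assuming the page has no content that genuinely needs horizontal scrolling, is to clip horizontal overflow on small screens:

```css
/* Hide horizontal overflow (and its scrollbar) on narrow viewports */
@media (max-width: 480px) {
  body {
    overflow-x: hidden;
  }
}
```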
\\nIf you’re wondering how to hide or remove these scrollbars correctly, this tutorial covers everything you should know to accomplish that.
\\nEditor’s note: This article was last updated by Alexander Godwin in March 2025 to include updated information on cross-browser compatibility, advanced use cases for custom scrollable divs, and additional interactive examples.
\\nThere are two different methods to hide the scrollbar for a webpage or a specific element within it. The first method involves setting the overflow property to hidden
, which effectively hides the scrollbar:
.no-horizontal-scrollbar {\\n /* Keeps the horizontal scrollbar hidden */\\n overflow-x: hidden;\\n}\\n\\n.no-vertical-scrollbar {\\n /* Keeps the vertical scrollbar hidden */\\n overflow-y: hidden;\\n}\\n\\n.no-scrollbars {\\n /* Keeps both the horizontal and vertical \\n scrollbars hidden */\\n overflow: hidden;\\n}\\n\\n
However, this method also takes away the ability to scroll and greatly affects basic accessibility. This is where the scrollbar-specific CSS pseudo-selectors come into play, which we will briefly discuss in the next few sections.
\\nApart from the overflow
CSS property, you primarily need just two more CSS features to manage the appearance of scrollbars:
-webkit-scrollbar
— To target scrollbars in all older versions of WebKit-based browsersscrollbar-width
— To target scrollbars in Modern browsers. This property is part of the new scrollbar properties and is currently supported in newer versions of most browsers.Along with these two CSS features, we will employ additional presentational CSS properties to enhance the appearance of the upcoming examples in later sections.
\\nEach browser-rendering engine takes a unique approach to managing scrollbar visibility, leading to the use of various vendor-specific CSS pseudo-selectors and properties. Let’s briefly examine these and their usage.
\\nYou can use the ::-webkit-scrollbar
pseudo-selector to hide the scrollbar in older Chrome, Edge, Opera, Safari, and other WebKit-based browsers. This is not currently the standard way; it’s a vendor-specific selector supported by a limited category of browsers.
The -webkit-scrollbar
pseudo-selector provides a wide range of options for customizing a scrollbar. You can adjust the appearance of the up and down arrows, modify the scrollbar thumb’s and track’s color, change the background, and more:
.scrollable-content {\n height: 150px;\n overflow-y: scroll; /* keep the element scrollable */\n}\n\n.scrollable-content::-webkit-scrollbar {\n display: none;\n}\n\n
In this example, we’ll focus on how to hide the scrollbar without affecting the ability to scroll:
\\nSee the Pen
\\nHiding the vertical scrollbar by Rahul (@_rahul)
\\non CodePen.
As you can see in the above demo, the scrollbar is hidden, but the page remains scrollable using both the mouse and keyboard. Note that this demo covers the hidden scrollbars in Chrome, Edge, and WebKit-based browsers only.
\\n\\nBrowsers developed by Microsoft also support the ::-webkit-scrollbar
pseudo-selector for adjusting scrollbar visibility and other appearance properties.
If you prefer not to use the -webkit-scrollbar
pseudo-selector, you can use the -ms-overflow-style
property to control the scrollbar visibility. Note that this property is specific to Microsoft Edge and Internet Explorer, and won’t function in other browsers:
.scrollable-content {\\n -ms-overflow-style: none;\\n} \\n\\n
For modern versions of browsers like Firefox, Google Chrome, and Microsoft Edge, you have the option to use the scrollbar-width
property to control the scrollbar visibility. This CSS property is the standard method for controlling the visibility and width of scrollbars with CSS:
.scrollable-content {\\n scrollbar-width: none;\\n}\\n\\n
Here’s an implementation using all the pseudo-selectors and properties discussed above, which makes this example functional on all modern web browsers that implement WebKit, Edge, or Gecko rendering engines:
\\nSee the Pen
\\nUntitled by Rahul (@_rahul)
\\non CodePen.
While this approach might appear sophisticated to developers, it doesn’t offer any visual cues or indications to users regarding the presence of additional content below the current view. This lack of clarity can potentially result in a significant accessibility issue.
\\noverflow: hidden
vs. scrollbar-width: none
vs. -webkit-scrollbar
Method | \\nBrowser support | \\nScrolling behavior | \\nVisual impact | \\nBest for | \\nDrawbacks | \\n
---|---|---|---|---|---|
overflow: hidden | \\nAll browsers | \\nDisables scrolling completely | \\nHides content overflow | \\nContent that should never scroll | \\nContent becomes inaccessible | \\n
scrollbar-width: none | \\nModern browsers | \\nMaintains scrolling | \\nHides only scrollbar | \\nModern browser UIs | \\nRequires additional CSS for other browsers | \\n
-ms-overflow-style: none | \\nIE/Edge | \\nMaintains scrolling | \\nHides only scrollbar | \\nLegacy Microsoft browsers | \\nLimited to Microsoft browsers | \\n
::-webkit-scrollbar | \\nOld Chrome, Safari, Opera browsers | \\nMaintains scrolling | \\nHides only scrollbar | \\nWebKit/Blink browsers | \\nRequires vendor prefix | \\n
## overflow: hidden vs. custom scrollable divs

overflow: hidden | \ncustom scrollable divs | \n
---|---|
Simple to implement | \\nFull content remains accessible | \\n
Good for truncating content | \\nBetter user experience for long content | \\n
Prevents layout shifts | \\nCan be styled to match design | \\n
Better performance since the browser doesn’t need to handle scrolling | \\nMaintains content context | \\n
When building modern web interfaces, basic scrollable divs are often insufficient. Let’s explore some advanced patterns that enhance user experience and performance.
\\nPerfect for content feeds and long lists, infinite scrolling loads content as users scroll:
\nconst observeScroll = (container) => {\n const observer = new IntersectionObserver(entries => {\n if (entries[0].isIntersecting) {\n loadMoreContent(); // your function that fetches and appends the next batch\n }\n }, { root: container, threshold: 0.1 });\n\n // .scroll-trigger is a sentinel element placed at the end of the list\n observer.observe(container.querySelector(\'.scroll-trigger\'));\n};\n\n
Essential for handling large datasets efficiently by rendering only visible items:
\\nclass VirtualScroller {\\n constructor(container, items) {\\n this.visibleItems = Math.ceil(container.clientHeight / this.rowHeight);\\n this.totalHeight = items.length * this.rowHeight;\\n\\n container.addEventListener(\'scroll\', () => {\\n const startIndex = Math.floor(container.scrollTop / this.rowHeight);\\n this.renderVisibleItems(startIndex);\\n });\\n }\\n}\\n\\n
Useful for comparing content side by side, like code diffs:
\\nconst syncScroll = (containers) => {\\n containers.forEach(container => {\\n container.onscroll = (e) => {\\n containers\\n .filter(c => c !== e.target)\\n .forEach(other => other.scrollTop = e.target.scrollTop);\\n };\\n });\\n};\\n\\n
Create engaging animations as users scroll through content:
\\nconst observer = new IntersectionObserver(entries => {\\n entries.forEach(entry => {\\n if (entry.isIntersecting) {\\n entry.target.style.opacity = entry.intersectionRatio;\\n }\\n });\\n}, { threshold: Array.from({length: 100}, (_, i) => i / 100) });\\n\\n
Performance Tips:

- Use requestAnimationFrame for smooth animations (sketched below)
- Use will-change for better performance
- Use the CSS contain property
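Here is a minimal sketch of the first tip; container and updateScrollEffects are placeholders for your own element and handler:

```js
// Batch scroll work into animation frames instead of running it on every event
let ticking = false;

container.addEventListener("scroll", () => {
  if (!ticking) {
    ticking = true;
    requestAnimationFrame(() => {
      updateScrollEffects(container.scrollTop); // hypothetical handler
      ticking = false;
    });
  }
});
```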
\\nEach pattern serves specific use cases. Choose based on your needs while considering performance and user experience.
\\n\\nTailwindCSS can also be used to hide the scrollbar. To achieve the desired effect, the following CSS code is added to the stylesheet:
\\n@layer utilities { \\n .no-scrollbar::-webkit-scrollbar { \\n display: none; \\n } \\n\\n .no-scrollbar { \\n -ms-overflow-x: hidden; \\n scrollbar-width: none; \\n } \\n}\\n\\n
The CodePen demo below shows how to hide the scrollbar using TailwindCSS:
\\nSee the Pen
\\nHiding the vertical scrollbar by oviecodes (@oviecodes)
\\non CodePen.
Whenever you're using Tailwind, you can use its utilities; when using plain CSS, you can roll your own styles as shown above.
\\nIf your website has a specific section with scrollable content, maintaining a visible scrollbar is advantageous for usability and accessibility. However, as discussed earlier, a constantly visible scrollbar can compromise the aesthetics of your site’s UI in certain cases.
\\nIn such situations, you can make the scrollbar visible only upon hovering. This implies that the scrollbar remains hidden if the target section is not in use.
\\nTake the following implementation as an example, featuring a vertically scrollable element. The markup part is straightforward and doesn’t directly affect the presentation or functionality of the scrollable element:
\\n<div class=\\"scrollable-content\\">\\n ...\\n <!-- Place some content here. --\x3e\\n</div>\\n\\n
In the CSS part, constraining the height of the .scrollable-content
div and hiding its overflow establish the foundation for making it truly scrollable. While this may initially result in an unpolished appearance, we can enhance its visual appeal by incorporating additional CSS properties.
I’m focusing on the essential CSS properties in the code below:
\\n.scrollable-content {\\n max-width: 450px;\\n max-height: 375px;\\n overflow-y: hidden;\\n /* More presentational CSS */\\n}\\n\\n
Now, changing the vertical overflow to scroll
upon hover will ensure that the scrollbar appears only when the user intends to use the .scrollable-content
section. To provide a seamless user experience, we should extend this functionality beyond just hovering.
By incorporating the :active
and :focus
pseudo-classes, users can utilize the mouse wheel to scroll up and down the scrollable element:
.scrollable-content:hover,\\n.scrollable-content:active,\\n.scrollable-content:focus {\\n overflow-y: scroll;\\n}\\n\\n
The CodePen demo shows how to conditionally hide the scrollbar:
\\nSee the Pen
\\nScrollable Elements w/ CSS by Rahul (@_rahul)
\\non CodePen.
As evident in the example above, hovering triggers the appearance of the vertical scrollbar but also introduces a slight text and layout shift within the scrollable element. This occurs because the browser adjusts the scrollable element to accommodate the vertical scrollbar. This adjustment may disrupt the overall visual flow.
\\nTo eliminate this shift and achieve a smoother scrollbar appearance, you can integrate the scrollbar-gutter
CSS property, which essentially prepares the element for potential layout adjustments caused by scrollbars.
By setting the value to stable
, the scrollbar-gutter
property will pre-adjust the element only from the edge where the scrollbar is intended to be added. Setting it to stable both-edges
will pre-adjust it from both edges to maintain a proportional appearance:
.scrollable-content {\\n ...\\n scrollbar-gutter: stable both-edges;\\n}\\n\\n
For additional enhancements, you can go the extra mile and stylize the scrollbar using the scrollbar-specific pseudo-elements. Here’s a demo showcasing a scrollable element with a decorated scrollbar without any layout shifts:
\\nSee the Pen
\\nSmart Scrollable Elements Using CSS by Rahul (@_rahul)
\\non CodePen.
Here’s a quick demonstration showcasing both scrollbars hinting and toggling to maintain visibility. The demo implements the previously covered code examples and uses a bit of JavaScript for toggling between two different scrolling functionalities:
\\nSee the Pen
\\nSmart and Accessible Scrollable Elements w/ CSS by Rahul (@_rahul)
\\non CodePen.
The CodePen demo below shows how to toggle the scrollbar on a div by pressing a combination of keys on the keyboard, which can help to improve accessibility.
\\nSee the Pen
\\nUntitled by oviecodes (@oviecodes)
\\non CodePen.
Note that hiding or showing scrollbars with CSS won’t significantly impact page load or rendering times. Using CSS to style scrollbars might require a bit more CSS, but it won’t noticeably affect load or rendering times. The same applies to hiding scrollbars with CSS.
\nIf you're using a JavaScript library to manage the scrollbar display, I recommend doing that with CSS instead to reduce the overall page size and load time.
\\nuseState
.The codepen demo below shows how React’s useState
can be used to toggle the state of a scrollbar. This technique can apply to modals and other custom scrollable divs:
See the Pen
\\nReact Hide Scrollbar by oviecodes (@oviecodes)
\\non CodePen.
Hidden scrollbars have become a common aesthetic choice in modern web interfaces. However, this seemingly simple design decision can significantly impact web accessibility. For more information, check out this guide to styling CSS scrollbars. Let’s explore why scrollbars matter and how to implement them responsibly.
\\nScrollbars serve as crucial visual indicators that provide users with spatial awareness of content length and their current position. For users relying on screen readers, scrollbars offer essential context about navigable content and help maintain orientation.
\\nHiding scrollbars can potentially create barriers to:
\\nAlong with dynamically hiding the scrollbar as discussed above, the techniques discussed below also help to improve accessibility.
\\nHere’s a basic implementation that balances aesthetics with accessibility: using a thin and minimally styled scrollbar:
\\n.scrollable-container {\\n /* Make container scrollable */\\n overflow-y: auto;\\n max-height: 500px;\\n\\n /* Enhance keyboard accessibility */\\n outline: none;\\n\\n /* Style scrollbar for modern browsers */\\n scrollbar-width: thin;\\n scrollbar-color: #888 #f1f1f1;\\n}\\n\\n/* Webkit browsers */\\n.scrollable-container::-webkit-scrollbar {\\n width: 6px;\\n}\\n\\n.scrollable-container::-webkit-scrollbar-thumb {\\n background-color: #888;\\n border-radius: 3px;\\n}\\n\\n
Add ARIA attributes for screen readers:
\\n<div \\n class=\\"scrollable-container\\"\\n role=\\"region\\"\\n aria-label=\\"Scrollable content\\"\\n tabindex=\\"0\\"\\n>\\n <!-- Content here --\x3e\\n</div>\\n\\n
The CodePen below shows how aria-hidden
and role=\\"region\\"
can be used to ensure hidden scrollbars remain accessible to screen readers:
See the Pen
\\nAria-hidden & role region by oviecodes (@oviecodes)
\\non CodePen.
role=\\"region\\"
— Added to the scrollable container to indicate it is a distinct section of content that users might want to navigate to directlyaria-label=\\"Scrollable content\\"
— Provides a descriptive name for the region that screen readers can announcearia-hidden={!showScrollbar}
— Tells screen readers whether the scrollable content is currently hidden. This matches the visual state of the scrollbarOn the button:
\\naria-controls=\\"scrollable-content\\"
— Associates the button with the content it controlsaria-expanded={showScrollbar}
— Indicates whether the controlled content is expanded (visible) or collapsed (hidden)To address several WCAG guidelines and keyboard accessibility requirements:
\\ntabIndex={0}
to make the scrollable region focusablerole=\\"region\\"
identifies the scrollable areaaria-label
provides contextaria-control
s and aria-expanded
maintain relationshipsHere’s a CodePen demo that shows the implementation:
\\nSee the Pen
\\nWCAG guidelines by oviecodes (@oviecodes)
\\non CodePen.
When implementing scrollbar visibility toggling, developers often overlook the performance implications. The sudden appearance or disappearance of scrollbars can cause unexpected layout shifts, leading to poor user experience and affecting your site’s Core Web Vitals scores.
\\nScrollbars take up space in the viewport. In most browsers, showing or hiding them changes the available content width, which can cause surrounding elements to shift. This creates what Google calls Cumulative Layout Shift (CLS), a key metric for measuring user experience.
\\nHere’s how to calculate and compensate for scrollbar width:
\\nconst getScrollbarWidth = () => {\\n const outer = document.createElement(\'div\');\\n outer.style.visibility = \'hidden\';\\n outer.style.overflow = \'scroll\';\\n document.body.appendChild(outer);\\n\\n const inner = document.createElement(\'div\');\\n outer.appendChild(inner);\\n\\n const scrollbarWidth = outer.offsetWidth - inner.offsetWidth;\\n outer.parentNode.removeChild(outer);\\n\\n return scrollbarWidth;\\n};\\n\\n
To create a smooth scrollbar toggle experience, we need to compensate for the scrollbar width. Here’s a solution:
\\n:root {\\n --scrollbar-width: 0px;\\n}\\n\\n.scroll-wrapper {\\n position: relative;\\n width: 300px;\\n padding-right: var(--scrollbar-width);\\n}\\n\\n.content {\\n height: 200px;\\n overflow-y: auto;\\n transition: margin-right 0.2s ease;\\n}\\n\\n
The wrapper maintains a stable width while the content area adjusts smoothly. Using CSS Custom Properties allows for dynamic updates:
\\n// Calculate once on load\\nconst scrollbarWidth = getScrollbarWidth();\\ndocument.documentElement.style.setProperty(\'--scrollbar-width\', `${scrollbarWidth}px`);\\n\\n
A good design isn't just about aesthetics; it's about creating experiences that work for everyone.
\\nYou now have a good grasp of hiding scrollbars with CSS while maintaining smooth scrolling and accessibility. While hiding scrollbars may be suitable for certain UI and aesthetic considerations, it’s essential to remember that keeping scrollbars visible in scrollable sections helps users easily locate and navigate content, thereby enhancing accessibility.
\\nI hope this article has been helpful to you. See you in the next one!
A React UI library or React component library is a software system that comes with a collection of pre-built and reusable components — tables, charts, modals, navbars, cards, buttons, and maps that are ready to use in React applications. These components are out-of-the-box, and beautifully and uniquely styled.
\\nThese built-in React components and ready-to-use design elements reduce the need to build UI components from scratch. They can play a major role in improving your development experience, and reducing time to production.
\\nThere are countless React UI kits and libraries available today. Your choice of a particular React component library largely depends on your project and design requirements. However, you need to also consider the library’s popularity, pricing, support, community, maintenance, and licensing requirements.
\\nIn this guide, we’ll highlight 16 of the most useful kits and libraries and show how to use them in your next React app. A few of them are popular, and some are more obscure. All of them can help address the unique needs of your next React project.
\\nEditor’s note: This article was last updated by Joseph Mawa in March 2025 to add information on Headless UI for React and Hero UI, as well as provide pros/cons for each library.
\\nBuilding UI components from scratch can be tedious and sometimes futile. This is why component libraries exist; they provide ready-to-use design elements, thereby allowing developers to focus on building the UI without building everything from scratch.
\\nWhile building from scratch gives you complete control, it comes with a cost: maintainability.
\\nUsing UI library makes more sense in most cases and it brings with it the following benefits:
By providing polished components and design elements, UI libraries let developers focus on implementing an app's functionality, thereby speeding up the development process.

Faster development doesn't mean developers should compromise on the look of their application. That's why UI libraries ship beautifully designed, ready-to-use components that act as the building blocks of an application.

Because the web is accessed by different people with different devices and needs, building components from scratch that address your users' accessibility needs and render correctly on multiple devices is a huge task. UI libraries take care of this and also handle support for older browsers.

In some cases — usually involving a relatively new CSS property or browser feature — writing CSS that works in all browsers can be tricky. This can negatively affect your users' experience. UI libraries are an effective solution because they offer cross-browser compatibility: your application will work on all modern browsers.
In this section, we compare the top React component libraries by summarizing their functionality and highlighting their GitHub stars, weekly npm downloads, and age. This comparison will help you quickly pick the ones that meet your project requirements.

Keep in mind that GitHub stars are similar to social media likes: they don't necessarily reflect the quality of the software. Similarly, weekly npm downloads are an imperfect measure of real-world usage because they include downloads from automated build servers and bots.

On the other hand, you can quickly skim each library's functionality, pricing, and licensing requirements to identify the libraries that meet your project requirements:
| React UI library | GitHub stars | Licensing | Pricing | Functions | Weekly npm downloads | Age |
| --- | --- | --- | --- | --- | --- | --- |
| React Bootstrap | 22.5K | MIT | Free | jQuery-free, ready-to-use React components styled with Bootstrap | 1,070,903 | 11 years |
| Core UI | 787 | MIT and commercial licenses | Free and paid versions | jQuery-free, customizable, easy-to-learn React UI components and React admin templates | 146,306 | 7 years |
| PrimeReact | 7.4K | MIT | Free | Rich set of open source UI components for React | 151,116 | 8 years |
| Grommet | 8.4K | Apache License 2.0 | Free | Accessibility, modularity, responsiveness, and theming | 33,808 | 10 years |
| Onsen UI | 8.8K | Apache License 2.0 | Free | Native-feeling progressive web apps (PWAs) and hybrid apps | 20,392 | 9 years |
| MUI | 94.8K | MIT and commercial licenses | Free and paid versions | Ready-to-use foundational React components styled with Google’s Material Design | 4,971,142 | 11 years |
| Chakra UI | 38.5K | MIT | Free | Simple, modular, and accessible UI components | 668,703 | 5 years |
| Ant Design | 93.6K | MIT | Free | High-quality React components for building enterprise-class web applications | 1,705,267 | 10 years |
| Semantic UI React | 13.3K | MIT | Free | jQuery-free, declarative API, beautifully styled React components for enterprise-class UI | 274,350 | 10 years |
| Blueprint UI | 20.9K | Apache License 2.0 | Free | Optimized for building complex, data-dense interfaces for desktop applications | 167,600 | 9 years |
| Visx | 19.7K | MIT | Free | Low-level visualization primitives for React | 664,000 | 7 years |
| Fluent UI | 18.8K | MIT | Free | Robust React-based components for building web experiences | 115,139 | 5 years |
| Evergreen | 12.4K | MIT | Free | Works out of the box, offers server-side rendering, flexible, composable, enterprise-grade | 15,935 | 7 years |
| Mantine | 27.7K | MIT | Free | Free and open source, usable in any project, customizable, responsive, and adaptive | 243,303 | 5 years |
| Headless UI for React | 26.8K | MIT | Free | Unstyled, fully accessible React components you can style with Tailwind CSS | 2,372,740 | 4 years |
| Hero UI | 23.1K | MIT | Free | Built on Tailwind CSS and React Aria for accessible, aesthetically pleasing React applications | 4,223 | 4 years |
React Bootstrap rebuilds Bootstrap — one of the most popular frontend frameworks — for React, removing the unnecessary jQuery dependency.

Although the jQuery dependency is removed, React Bootstrap embraces its Bootstrap core and works with the entire Bootstrap stylesheet. Consequently, it is compatible with many Bootstrap themes.

As one of the oldest React component libraries, React Bootstrap has evolved and matured alongside React. Additionally, each component is implemented with accessibility in mind, so it offers a set of accessible-by-default design elements.

Run the following command to install React Bootstrap:

```sh
npm install react-bootstrap bootstrap
```

You can easily import and use components like this:

```js
// Import the Bootstrap stylesheet once, e.g. in your entry file
import 'bootstrap/dist/css/bootstrap.min.css';

import Button from 'react-bootstrap/Button';
import Stack from 'react-bootstrap/Stack';

// or less ideally
// import { Button, Stack } from 'react-bootstrap';

<Stack direction="horizontal" gap={2}>
  <Button as="a" variant="primary">
    Button as link
  </Button>
  <Button as="a" variant="success">
    Button as link
  </Button>
</Stack>
```
Core UI is one of the most powerful React UI component libraries. It provides a robust collection of simple, customizable, easy-to-use React UI components and React admin templates. Consequently, Core UI offers all the design elements needed to build modern, beautiful, and responsive React applications, cutting development time significantly.

In addition to speeding up development, Core UI provides beautifully handcrafted design elements that are Bootstrap-compatible. These are true React components, built from scratch with Bootstrap but without the jQuery dependency.

Furthermore, Core UI provides both mobile and cross-browser compatibility. It also supports other popular frameworks like Angular and Vue.

To use Core UI, install it by running the following command:

```sh
npm install @coreui/react
```

Then you can import and use any of the built-in React components like so:

```js
import React from 'react'
import { CButton } from '@coreui/react'

export const ButtonExample = () => {
  return (
    <>
      <CButton color="primary">Primary</CButton>
    </>
  )
}
```
PrimeReact, built by PrimeTek Informatics, is an extraordinary React UI kit that accelerates frontend design and development, featuring a collection of more than 70 components to choose from.

In addition to a wide variety of components, PrimeReact offers custom themes, premium application templates, accessibility (a11y) support, and responsive, touch-enabled UI components that deliver an excellent UI experience on any device.

For more details, check out PrimeReact on GitHub.

The kit is easy to install and use:

```sh
npm i primereact --save
```

For icons, you can install the `primeicons` library:

```sh
npm i primeicons --save
```

After installation, you can import and use a component like this:

```js
import { Button } from "primereact/button";

function PrimeButtonEx() {
  return (
    <div>
      <Button>Button</Button>
    </div>
  );
}
```
Part design system, part framework, Grommet is a UI library built on React. It features a great set of components that make it easy to get started, along with powerful theming tools that let you tailor the component library to your desired layout, color, and type.

The Grommet Design Kit is a drag-and-drop tool that makes designing your layout and components a breeze. It features sticker sheets, app templates, and plenty of icons.

To set up Grommet, run the following command in your React app:

```sh
npm i grommet
```

To use a component such as `Button`, import it from the `grommet` package:

```js
import { Grommet, Button } from "grommet";

function GrommetButtonEx() {
  return (
    <Grommet className="App">
      <Button label="Button" />
    </Grommet>
  );
}
```
If you want your web app to feel native, Onsen UI is the library for you. Onsen UI is designed to enrich the user experience with a mobile-like feel, and it's packed with features that reproduce the UI experience of native iOS and Android devices.

Onsen UI's elements and components are natively designed and perfect for developing hybrid apps and web apps. The library lets you simulate page transitions, animations, ripple effects, and popup modals — basically, any effect you would find on native Android and iOS devices.

To use Onsen in a React app, first install the npm packages:

```sh
npm i onsenui react-onsenui --save
```

`onsenui` contains the Onsen UI core; `react-onsenui` contains the React components:

```js
import { Page, Button } from "react-onsenui";

function OnsenButtonEx() {
  return (
    <Page>
      <Button>Click Me!!</Button>
    </Page>
  );
}
```

Then, import the Onsen CSS:

```js
import "onsenui/css/onsenui.css";
import "onsenui/css/onsen-css-components.css";
```
I fondly refer to Onsen UI as the native CSS of the web.
\\nMUI is one of the popular React component libraries. It is based on Google’s Material Design. It is feature-rich with an extensive collection of ready-to-use components.
\\nTo install, run the following command:
\\nsh\\n# with npm\\nnpm install @mui/material @emotion/react @emotion/styled\\n\\n# with yarn\\nyarn add @mui/material @emotion/react @emotion/styled\\n\\n
Next, import the component you want to use from the @mui/material
:
js\\nimport Button from \\"@mui/material/Button\\";\\n\\nfunction MatButtonEx() {\\n return (\\n <div>\\n <Button color=\\"primary\\">Button</Button>\\n </div>\\n );\\n}\\n\\n
MUI also provides beautiful premium themes and templates you can purchase to jumpstart your project. Check out this article for a deeper dive into MUI.
\\nI am so proud of my fellow Nigerian, Segun Adebayo, for developing Chakra UI. It has a clean and neat UI and is one of the most complete React UI kits I have ever seen. Its APIs are simple but composable, and the accessibility is great.
\\nChakra UI has over 30.8K GitHub stars, and is very extensible and customizable.
\\nInside your React project, run the following command to install Chakra UI:
\\nsh\\nnpm i @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4\\n# OR\\nyarn add @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4\\n\\n
Chakra UI has a ChakraProvider
that we must provide at the root of our application when we want to use Chakra components:
js\\nimport * as React from \\"react\\";\\n\\n// 1. import `ChakraProvider` component\\nimport { ChakraProvider } from \\"@chakra-ui/react\\";\\n\\nfunction App({ Component }) {\\n // 2. Use at the root of your app\\n return (\\n <ChakraProvider>\\n <Component />\\n </ChakraProvider>\\n );\\n}\\n\\n
To use a component — for example, `Button` — we have to import it from `@chakra-ui/react`:

```js
import { Button, ButtonGroup } from "@chakra-ui/react";
```

Then we can render `Button` like so:

```js
function ChakraUIButtonEx() {
  return (
    <div>
      <Button>Click Me</Button>
    </div>
  );
}
```

For more information about Chakra UI and its components, visit the official docs.
\\nAnt Design is regarded as one of the best React UI kits in the world. With over 88K stars on GitHub, it tops the list as one of the most used and downloaded React UI kits.
\\nAnt Design incorporates and promotes global design patterns and offers features like powerful theme customization, high-quality React components, and internationalization support.
\\nInstall Ant Design like so:
\\nsh\\n# npm\\nnpm install antd\\n# yarn\\nyarn add antd\\n\\n
We can import the style sheets manually:
\\njs\\nimport \'antd/dist/antd.css\';\\n\\n
We can import any component we want to use from antd
. For example, to use Button
, we would do this:
js\\nimport { Button } from \\"antd\\";\\n\\nfunction AntdEx() {\\n return <Button type=\\"primary\\">Primary Button</Button>;\\n}\\n\\n
Visit this page to see all the components in Ant Design. Ant Design also has a spin-off for Angular and a spin-off for Vue.js.
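The theme customization mentioned above is worth a quick look. A minimal sketch using antd v5's design-token API (`ConfigProvider` ships with antd; the color value here is just an example):

```js
import { Button, ConfigProvider } from "antd";

// Override the primary color for everything rendered inside the provider
function ThemedAntdEx() {
  return (
    <ConfigProvider theme={{ token: { colorPrimary: "#00b96b" } }}>
      <Button type="primary">Primary Button</Button>
    </ConfigProvider>
  );
}
```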
\\nSemantic UI React is the official Semantic UI integration for React. It is a complete React UI kit that is built on top of the Semantic UI CSS framework.
\\nThis Semantic UI React boasts over 100 components and offers the following robust features:
\\n<Button type=\\"primary\\" />
, we can write <Button primary />
. A prop can translate to many values. For example, the icon
props can be an icon name
, an <Icon />
instance, or an icon props objectas
props; a Header
may be rendered as an h3
element in the DOMSemantic UI React is easy to install:
```sh
# yarn
yarn add semantic-ui-react semantic-ui-css

# npm
npm install semantic-ui-react semantic-ui-css
```

After installation, we can import the minified CSS file:

```js
import "semantic-ui-css/semantic.min.css";
```

Now, let's see how we can use a built-in Semantic UI component — the `Button` component:

```js
import React from "react";
import { Button } from "semantic-ui-react";

const ButtonExampleButton = () => <Button>Click Here</Button>;

export default ButtonExampleButton;
```

To see all components in Semantic UI React, visit the official docs.
\\nBlueprint UI is a React-based UI kit for the web with over 20K stars on GitHub. It is optimized for building complex interfaces for desktop applications.
\\nInstalling Blueprint UI is very simple:
\\nsh\\nyarn add @blueprintjs/core react react-dom\\n\\n
@blueprintjs/core
is the core of the Blueprint UI kit. It contains over 40 components we can use. The react-dom
and react
packages are required for Blueprint UI to work. Additional components can be obtained from:
@blueprintjs/icons
@blueprintjs/select
@blueprintjs/datetime
@blueprintjs/table
@blueprintjs/timezone
To use a component from Blueprint UI, we’ll have to import it from @blueprintjs/core
. For example, to use the Button
component, we will have to import it from @blueprintjs/core
:
js\\nimport { Button } from \\"@blueprintjs/core\\";\\n\\n
Then we can render the Button
like so:
js\\nfunction BlueprintUIButtonEx() {\\n return (\\n <div>\\n <Button intent=\\"success\\" text=\\"button content\\">\\n Click Me\\n </Button>\\n </div>\\n );\\n}\\n\\n
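Blueprint components also expect the library's stylesheet to be loaded. A minimal sketch following the import paths in Blueprint's docs (verify them against the version you install):

```js
// Blueprint's styles ship inside the core package; normalize.css is a
// separate dependency recommended by the Blueprint docs
import "normalize.css";
import "@blueprintjs/core/lib/css/blueprint.css";
```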
Visx, short for visual components, is a collection of reusable, low-level visualization components developed by Airbnb. It consists of several standalone packages for building flexible visual interfaces with React.

Visx is open source and designed to make creating complex, interactive data visualizations easier using React components. It provides a set of modular, low-level building blocks for custom visualizations, giving developers fine-grained control over the appearance and behavior of their UI.

You can install Visx with npm or yarn:

```sh
# npm
npm install @visx/shape @visx/scale @visx/axis @visx/group @visx/text

# yarn
yarn add @visx/shape @visx/scale @visx/axis @visx/group @visx/text
```
Fluent UI, formerly Office UI Fabric, is a set of open source, cross-platform design and user interface (UI) components and libraries developed by Microsoft. It is designed to help developers create consistent, visually appealing, and accessible user interfaces for their web and mobile applications. Fluent UI provides a comprehensive set of UI components that follow the Fluent Design System principles, such as buttons, forms, menus, and more.
To install Fluent UI, run the following code:

```sh
# with npm
npm install @fluentui/react

# with yarn
yarn add @fluentui/react
```
Evergreen is a design system and set of open source, React-based UI components created by Segment, a customer data platform company. Evergreen UI is designed to help developers build modern and elegant user interfaces for web applications. It provides a collection of reusable, customizable components that follow a minimalist design philosophy.
Evergreen can be installed by running the code below:

```sh
# yarn
yarn add evergreen-ui
# npm
npm install --save evergreen-ui
```

You can import and use components as seen below:

```js
import { Button } from "evergreen-ui";

function App() {
  return (
    <>
      <Button marginLeft={10} marginRight={10}>
        Default
      </Button>
      <Button marginRight={10} appearance="primary">
        Primary
      </Button>
      <Button marginRight={10} appearance="minimal">
        Minimal
      </Button>
    </>
  );
}
```
Mantine is an open source React component library that provides a wide range of high-quality, customizable, and accessible UI components for building modern web applications. Mantine is designed to simplify building user interfaces in React by offering a comprehensive set of React components and utilities.
Install Mantine by running either of the commands below:

```sh
# npm
npm install @mantine/core @mantine/hooks
# yarn
yarn add @mantine/core @mantine/hooks
```

You can import and use components from Mantine like so:

```js
import { Button } from "@mantine/core";

function Demo() {
  return <Button fullWidth>Full width button</Button>;
}
```
Headless UI is a React component library that provides unstyled, fully accessible components. It is developed and maintained by Tailwind Labs, the developers of Tailwind CSS, so it integrates seamlessly with Tailwind CSS, one of the leading CSS frameworks.

Getting started with the latest version of Headless UI for React is straightforward. Install it from the npm package registry like so:

```sh
npm install @headlessui/react@latest
```

After installing it, you can import and use its built-in components as in the example below. If you're using it with Tailwind CSS, be sure to set that up as well.

```js
import { Button } from '@headlessui/react'

export default function Example() {
  return (
    <Button className="rounded-md bg-gray-600 py-1.5 px-3">
      Save changes
    </Button>
  )
}
```
Hero UI, previously known as NextUI, is a fully featured React component library built on top of Tailwind CSS and React Aria. Under the hood, it uses Framer Motion for animation.

Hero UI is a React component library to look out for if you want to build accessible and aesthetically pleasing React applications. It comes with built-in theme functionality that you can easily customize to meet your design requirements.

Getting started with Hero UI is fairly straightforward. You can install the Hero UI command line tool and use it to bootstrap a React project:

```sh
# Install the command line tool
npm install -g heroui-cli

# Create a project using the command line tool
heroui init hero-ui-app
```

Hero UI's components are distributed as separate npm packages, so you can install only the ones you need:

```sh
npm install @heroui/button
```

After installation, you can import and use a component like so:

```js
// When installing components individually, import from the component's own
// package rather than the all-in-one @heroui/react bundle
import { Button } from "@heroui/button";

export default function App() {
  return <Button color="primary">Button</Button>;
}
```
If you're interested in other React UI libraries, check out the following:

In this guide, we reviewed a comprehensive list of React UI kits — everything from innovative newcomers to popular stalwarts. We also shared other React UI kits that are less popular but still pack a punch.

Now you should have the basic, foundational knowledge you need to select the right UI kit for your next React project.
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nA React UI library or React component library is a software system that comes with a collection of pre-built and reusable components — tables, charts, modals, navbars, cards, buttons, and maps that are ready to use in React applications. These components are out-of-the-box, and beautifully and uniquely styled.
\\nThese built-in React components and ready-to-use design elements reduce the need to build UI components from scratch. They can play a major role in improving your development experience, and reducing time to production.
\\nThere are countless React UI kits and libraries available today. Your choice of a particular React component library largely depends on your project and design requirements. However, you need to also consider the library’s popularity, pricing, support, community, maintenance, and licensing requirements.
\\nIn this guide, we’ll highlight 16 of the most useful kits and libraries and show how to use them in your next React app. A few of them are popular, and some are more obscure. All of them can help address the unique needs of your next React project.
\\nEditor’s note: This article was last updated by Joseph Mawa in March 2025 to add information on Headless UI for React and Hero UI, as well as provide pros/cons for each library.
\\nBuilding UI components from scratch can be tedious and sometimes futile. This is why component libraries exist; they provide ready-to-use design elements, thereby allowing developers to focus on building the UI without building everything from scratch.
\\nWhile building from scratch gives you complete control, it comes with a cost: maintainability.
\\nUsing UI library makes more sense in most cases and it brings with it the following benefits:
\\nBy providing beautiful components or design elements, UI libraries ensure that developers focus on implementing the functionality of an app, thereby speeding up the development process.
\\nFaster development time doesn’t mean developers should compromise on the look of their application. This is why UI libraries come with beautifully designed, ready-to-use components that act as the building blocks of an application.
\\nBecause the web is accessed by different people with different devices and needs, it is a huge task to build components from scratch that address your users’ accessibility needs and have the correct styles on multiple devices. UI libraries take care of these and also handle the support of older browsers.
\\nIn some cases — usually involving the use of a relatively new CSS property or browser tool — developing CSS that works with all browsers can be tricky. This can negatively affect your user’s experience. UI libraries are an effective solution for this because they have cross-browser compatibility; your application will work on all modern browsers.
\\nIn this section, we will compare the top React component libraries by summarizing their functionalities and highlighting their GitHub stars, weekly npm downloads, and newness. This comparison will help you quickly pick those that meet your project requirements.
\\nYou need to be aware that GitHub stars are similar to social media likes. They in no way reflect the quality of the software. Similarly, the weekly npm downloads are far from accurate because they include downloads from automated build servers and bots.
\\nOn the other hand, you can quickly skim through the functionality of each library, pricing, and licensing requirements so that you can identify those libraries that meet your project requirements:
\\nReact UI library | \\nGitHub stars | \\nLicensing | \\nPricing | \\nFunctions | \\nWeekly npm downloads | \\nNewness | \\n
---|---|---|---|---|---|---|
React Bootstrap | \\n22.5K | \\nMIT | \\nFree | \\njQuery-free, ready-to-use React components styled with Bootstrap | \\n1,070,903 | \\n11 years | \\n
Core UI | \\n787 | \\nMIT and Commercial Licenses | \\nFree and Paid versions | \\njQuery-free, customizable, easy to learn React.js UI components, and React.js Admin Templates. | \\n146,306 | \\n7 years | \\n
PrimeReact | \\n7.4K | \\nMIT | \\nFree | \\nRich set of open source UI components for React | \\n151,116 | \\n8 years | \\n
Grommet | \\n8.4K | \\nApache License 2.0 | \\nFree | \\nAccessibility, modularity, responsiveness, and theming | \\n33,808 | \\n10 years | \\n
Onsen UI | \\n8.8K | \\nApache License 2.0 | \\nFree | \\nNative-feeling progressive web apps (PWAs) and hybrid apps | \\n20,392 | \\n9 years | \\n
MUI | \\n94.8K | \\nMIT and Commercial Licenses | \\nFree and Paid versions | \\nReady-to-use foundational React components styled with Google’s Material Design | \\n4,971,142 | \\n11 years | \\n
Chakra UI | \\n38.5K | \\nMIT | \\nFree | \\nSimple, modular, and accessible UI Components | \\n668,703 | \\n5 years | \\n
Ant Design | \\n93.6K | \\nMIT | \\nFree | \\nA set of high-quality React components for building enterprise-class UI designed for web applications | \\n1,705,267 | \\n10 years | \\n
Semantic UI React | \\n13.3K | \\nMIT | \\nFree | \\njQuery-free, declarative API, beautifully styled React components for enterprise-class UI | \\n274,350 | \\n10 years | \\n
Blueprint UI | \\n20.9K | \\nApache License 2.0 | \\nFree | \\nOptimized for building complex, data-dense interfaces for desktop applications | \\n167,600 | \\n9 years | \\n
Visx | \\n19.7K | \\nMIT | \\nFree | \\nConsists of low-level visualization primitives for React | \\n664,000 | \\n7 years | \\n
Fluent UI | \\n18.8K | \\nMIT | \\nFree | \\nRobust React-based frontend framework/components for building web experiences | \\n115,139 | \\n5 years | \\n
Evergreen | \\n12.4K | \\nMIT | \\nFree | \\nWorks out of the box, offers server-side rendering, flexible, composable, enterprise-grade | \\n15,935 | \\n7 years | \\n
Mantine | \\n27.7K | \\nMIT | \\nFree | \\nFree and open source, usable in any project, customizable, responsive, and adaptive | \\n243,303 | \\n5 years | \\n
Headless UI for React | \\n26.8K | \\nMIT | \\nFree | \\nFree and open source. It provides unstyled and fully accessible React components you can style using Tailwind Css | \\n2,372,740 | \\n4 years | \\n
Hero UI | \\n23.1K | \\nMIT | \\nFree | \\nBuilt on top of Tailwind CSS and React Aria. You can use it to build accessible and aesthetically pleasing React applications | \\n4,223 | \\n4 years | \\n
React Bootstrap rebuilds Bootstrap — the most popular frontend framework for React — removing the unnecessary jQuery dependency.
\\nAlthough the jQuery dependency is removed, React Bootstrap embraces its Bootstrap core and works with the entire Bootstrap stylesheet. Consequently, it is compatible with many Bootstrap themes.
\\nAs one of the oldest React frameworks, React Bootstrap has evolved and matured linearly with React. Additionally, each component is implemented with accessibility in mind, so it offers a set of accessible-by-default design elements.
\\n\\nRun the following code to install React Bootstrap:
\\nsh\\nnpm install react-bootstrap bootstrap\\n\\n
You can easily import and use components like this:
\\njs\\nimport Button from \'react-bootstrap/Button\';\\n\\n// or less ideally\\nimport { Button } from \'react-bootstrap\';\\n\\n<Stack direction=\\"horizontal\\" gap={2}>\\n <Button as=\\"a\\" variant=\\"primary\\">\\n Button as link\\n </Button>\\n <Button as=\\"a\\" variant=\\"success\\">\\n Button as link\\n </Button>\\n</Stack>\\n\\n
Core UI is one of the most powerful React UI component libraries. It provides a robust collection of simple, customizable, easy-to-use React UI components and React Admin Templates. Consequently, Core UI provides all the design elements needed to build modern, beautiful, and responsive React applications, thereby cutting development time significantly.
\\nIn addition to speeding up your development time, Core UI provides beautifully handcrafted design elements that are Bootstrap-compatible. These design elements are true React components built from scratch with Bootstrap but without the jQuery dependency.
\\nFurthermore, Core UI provides both mobile and cross-browser compatibility. It also supports most of the other popular frameworks like Angular and Vue
\\nTo use Core UI, install it by running the following command:
\\nsh\\nnpm install @coreui/react\\n\\n
Then you can import and use any of the built-in React components like so:
\\njs\\nimport React from \'react\'\\nimport { CButton } from \'@coreui/react\'\\n\\nexport const ButtonExample = () => {\\n return (\\n <>\\n <CButton color=\\"primary\\">Primary</CButton>\\n </>\\n )\\n}\\n\\n
PrimeReact, built by PrimeTek Informatics, is one of the most extraordinary React UI kits that accelerates frontend design and development, featuring a collection of more than 70 components to choose from.
\\nIn addition to a wide variety of components, PrimeReact features custom themes, premium application templates, a11y, and responsive and touch-enabled UI components to deliver an excellent UI experience on any device.
\\nFor more details, check out PrimeReact on GitHub.
\\nThe kit is easy to install and use:
\\nsh\\nnpm i primereact --save\\n\\n
For icons, you can download the primeicons
library:
sh\\nnpm i primeicons --save\\n\\n
After installation, you can import and use a component like this:
\\njs\\nimport { Button } from \\"primereact/button\\";\\n\\nfunction PrimeButtonEx() {\\n return (\\n <div>\\n <Button>Button</Button>\\n </div>\\n );\\n}\\n\\n
Part design, part framework, Grommet is a UI library based in React. It features a great set of components that make it easy to get started. The library also provides powerful theming tools that allow you to tailor the component library to align with your desired layout, color, and type.
\\nThe Grommet Design Kit is a drag-and-drop tool that makes designing your layout and components a breeze. It features sticker sheets, app templates, and plenty of icons:
\\nTo set up Grommet, run the following command in your React app:
\\nsh\\nnpm i grommet\\n\\n
To use a component such as Button
, import it from the \\"grommet\\"
package:
js\\nimport { Grommet, Button } from \\"grommet\\"\\n\\nfunction GrommetButtonEx() {\\n return (\\n <Grommet className=\\"App\\">\\n <Button label=\\"Button\\" />\\n </Grommet>\\n );\\n}\\n\\n
If you want your web app to feel native, Onsen UI is the library for you. Onsen UI is designed to enrich the user experience with a mobile-like feel. It’s packed with features that provide the UI experience of native iOS and Android devices.
\\nOnsen UI’s elements and components are natively designed and perfect for developing hybrid apps and web apps. The library enables you to simulate page transitions, animations, ripple effects, and popup models — basically, any effect you would find in native Android and iOS devices:
\\nTo use Onsen in a React app, first install the npm packages:
\\nsh\\nnpm i onsenui react-onsenui --save\\n
onsenui
contains the Onsen UI core instance. react-onsenui
contains the React components:
js\\nimport { Page, Button } from \\"react-onsenui\\";\\n\\nfunction OnsenButtonEx() {\\n return (\\n <Page>\\n <Button> Click Me!!</Button>\\n </Page>\\n );\\n}\\n\\n
Then, import the Onsen CSS:
\\njs\\nimport \\"onsenui/css/onsenui.css\\"\\nimport \\"onsenui/css/onsen-css-components.css\\"\\n
I fondly refer to Onsen UI as the native CSS of the web.
\\nMUI is one of the popular React component libraries. It is based on Google’s Material Design. It is feature-rich with an extensive collection of ready-to-use components.
\\nTo install, run the following command:
\\nsh\\n# with npm\\nnpm install @mui/material @emotion/react @emotion/styled\\n\\n# with yarn\\nyarn add @mui/material @emotion/react @emotion/styled\\n\\n
Next, import the component you want to use from the @mui/material
:
js\\nimport Button from \\"@mui/material/Button\\";\\n\\nfunction MatButtonEx() {\\n return (\\n <div>\\n <Button color=\\"primary\\">Button</Button>\\n </div>\\n );\\n}\\n\\n
MUI also provides beautiful premium themes and templates you can purchase to jumpstart your project. Check out this article for a deeper dive into MUI.
\\nI am so proud of my fellow Nigerian, Segun Adebayo, for developing Chakra UI. It has a clean and neat UI and is one of the most complete React UI kits I have ever seen. Its APIs are simple but composable, and the accessibility is great.
\\nChakra UI has over 30.8K GitHub stars, and is very extensible and customizable.
\\nInside your React project, run the following command to install Chakra UI:
\\nsh\\nnpm i @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4\\n# OR\\nyarn add @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4\\n\\n
Chakra UI has a ChakraProvider
that we must provide at the root of our application when we want to use Chakra components:
js\\nimport * as React from \\"react\\";\\n\\n// 1. import `ChakraProvider` component\\nimport { ChakraProvider } from \\"@chakra-ui/react\\";\\n\\nfunction App({ Component }) {\\n // 2. Use at the root of your app\\n return (\\n <ChakraProvider>\\n <Component />\\n </ChakraProvider>\\n );\\n}\\n\\n
To use a component — for example, Button
— we have to import it from @chakra-ui/react
:
js\\nimport { Button, ButtonGroup } from \\"@chakra-ui/react\\";\\n\\n
Then we can render Button
like so:
js\\nfunction ChakraUIButtonEx() {\\n return (\\n <div>\\n <Button>Click Me</Button>\\n </div>\\n );\\n}\\n\\n
For more information about Chakra UI and its components, visit the official docs.
\\nAnt Design is regarded as one of the best React UI kits in the world. With over 88K stars on GitHub, it tops the list as one of the most used and downloaded React UI kits.
\\nAnt Design incorporates and promotes global design patterns and offers features like powerful theme customization, high-quality React components, and internationalization support.
\\nInstall Ant Design like so:
\\nsh\\n# npm\\nnpm install antd\\n# yarn\\nyarn add antd\\n\\n
We can import the style sheets manually:
\\njs\\nimport \'antd/dist/antd.css\';\\n\\n
We can import any component we want to use from antd
. For example, to use Button
, we would do this:
js\\nimport { Button } from \\"antd\\";\\n\\nfunction AntdEx() {\\n return <Button type=\\"primary\\">Primary Button</Button>;\\n}\\n\\n
Visit this page to see all the components in Ant Design. Ant Design also has a spin-off for Angular and a spin-off for Vue.js.
\\nSemantic UI React is the official Semantic UI integration for React. It is a complete React UI kit that is built on top of the Semantic UI CSS framework.
\\nThis Semantic UI React boasts over 100 components and offers the following robust features:
\\n<Button type=\\"primary\\" />
, we can write <Button primary />
. A prop can translate to many values. For example, the icon
props can be an icon name
, an <Icon />
instance, or an icon props objectas
props; a Header
may be rendered as an h3
element in the DOMSemantic UI React is easy to install:
\\nsh\\n# yarn\\nyarn add semantic-ui-react semantic-ui-css\\n\\n# npm\\nnpm install semantic-ui-react semantic-ui-css\\n\\n
After installation, we can then import the minified CSS file:
\\njs\\nimport \\"semantic-ui-css/semantic.min.css\\";\\n\\n
Now, let’s see how we can use an inbuilt Semantic UI component. Let’s use the Button
component:
js\\nimport React from \\"react\\";\\nimport { Button } from \\"semantic-ui-react\\";\\n\\nconst ButtonExampleButton = () => <Button>Click Here</Button>;\\n\\nexport default ButtonExampleButton;\\n\\n
To see all components in Semantic UI React, visit the official docs.
\\nBlueprint UI is a React-based UI kit for the web with over 20K stars on GitHub. It is optimized for building complex interfaces for desktop applications.
\\nInstalling Blueprint UI is very simple:
\\nsh\\nyarn add @blueprintjs/core react react-dom\\n\\n
@blueprintjs/core
is the core of the Blueprint UI kit. It contains over 40 components we can use. The react-dom
and react
packages are required for Blueprint UI to work. Additional components can be obtained from:
@blueprintjs/icons
@blueprintjs/select
@blueprintjs/datetime
@blueprintjs/table
@blueprintjs/timezone
To use a component from Blueprint UI, we’ll have to import it from @blueprintjs/core
. For example, to use the Button
component, we will have to import it from @blueprintjs/core
:
js\\nimport { Button } from \\"@blueprintjs/core\\";\\n\\n
Then we can render the Button
like so:
js\\nfunction BlueprintUIButtonEx() {\\n return (\\n <div>\\n <Button intent=\\"success\\" text=\\"button content\\">\\n Click Me\\n </Button>\\n </div>\\n );\\n}\\n\\n
Visx stands for Visual Components and is a collection of reusable, low-level visualization components developed by Airbnb. It consists of several standalone packages for building flexible visual interfaces with React.
\\n\\nVisx is open source and designed to make creating complex and interactive data visualizations easier using React components. Visx provides a set of modular, low-level building blocks for creating custom visualizations, allowing developers to have fine-grained control over the appearance and behavior of their UI.
\\nYou can install Visx with npm or yarn:
\\nsh\\n# npm\\nnpm install @visx/shape @visx/scale @visx/axis @visx/group @visx/text\\n\\n# yarn\\nyarn add @visx/shape @visx/scale @visx/axis @visx/group @visx/text\\n\\n
Fluent UI, formerly Office UI Fabric, is a set of open source, cross-platform design and user interface (UI) components and libraries developed by Microsoft. It is designed to help developers create consistent, visually appealing, and accessible user interfaces for their web and mobile applications. Fluent UI provides a comprehensive set of UI components that follow the Fluent Design System principles, such as buttons, forms, menus, and more.
\\nTo install Fluent UI, run the following code:
\\nsh\\n# with npm\\nnpm install @fluentui/react\\n\\n# with yarn\\nyarn add @fluentui/react\\n\\n
Evergreen is a design system and set of open source, React-based UI components created by Segment, a customer data platform company. Evergreen UI is designed to help developers build modern and elegant user interfaces for web applications. It provides a collection of reusable, customizable components that follow a minimalist design philosophy.
\\nEvergreen can be installed by running the code below:
\\nsh\\n# yarn\\nyarn add evergreen-ui\\n# npm\\nnpm install --save evergreen-ui\\n\\n
You can import and use components as seen below:
\\njs\\nimport { Button } from \\"evergreen-ui\\";\\n\\nfunction App() {\\n return (\\n <>\\n <Button marginLeft={10} marginRight={10}>\\n Default\\n </Button>\\n <Button marginRight={10} appearance=\\"primary\\">\\n Primary\\n </Button>\\n <Button marginRight={10} appearance=\\"minimal\\">\\n Minimal\\n </Button>\\n </>\\n );\\n}\\n\\n
Mantine is an open source React component library that provides a wide range of high-quality, customizable, and accessible UI components for building modern web applications. Mantine is designed to simplify building user interfaces in React by offering a comprehensive set of React components and utilities.
\\nInstall Mantine by running any of the code below:
\\nsh\\n# npm\\nnpm install @mantine/core @mantine/hooks\\n# yarn\\nyarn add @mantine/core @mantine/hooks\\n\\n
You can import and use components from Mantine like so:
\\njs\\nimport { Button } from \\"@mantine/core\\";\\n\\nfunction Demo() {\\n return <Button fullWidth>Full width button</Button>;\\n}\\n\\n
Headless UI is one of the React component libraries. It provides tons of unstyled and fully accessible React components. It is developed and maintained by Tailwind Labs, the developers of Tailwind CSS. Therefore, you can easily integrate it with Tailwind CSS, one of the leading CSS frameworks.
\\nGetting started with the latest version of Headless UI for React is simple and straightforward. You need to install it from the npm package registry like so:
\\nsh\\nnpm install @headlessui/react@latest\\n\\n
After successfully installing it, you can import and use its built-in components as in the example below. If you’re using it with Tailwind CSS, be sure to set it up as well.
\\njs\\nimport { Button } from \'@headlessui/react\'\\n\\nexport default function Example() {\\n return (\\n <Button className=\\"rounded-md bg-gray-600 py-1.5 px-3\\">\\n Save changes\\n </Button>\\n )\\n}\\n\\n
Hero UI, which was previously known as Next UI, is one of the fully-featured React component libraries. It was built on top of Tailwind CSS and React Aria. Under the hood, it uses Framer Motion for animation.
\\nHero UI is one of the React component libraries to look out for if you want to build accessible and aesthetically pleasing React applications. It comes with a built-in theme functionality that you can easily customize to meet your design requirements.
\\nGetting started with Hero UI is fairly straightforward. You can install the Hero UI command line tool and use it to bootstrap a React project:
\\nsh\\n# Install the command line tool\\nnpm install -g heroui-cli\\n\\n# Create a project using the command line tool\\nheroui init hero-ui-app\\n\\n
Hero UI has tons of React component libraries distributed as separate npm packages. You install each React component you want to use separately:
\\nsh\\nnpm install @heroui/button\\n\\n
After installation, you can import and use a component like so:
\\njs\\nimport { Button } from \\"@heroui/react\\";\\n\\nexport default function App() {\\n return <Button color=\\"primary\\">Button</Button>;\\n}\\n\\n
If you’re interested in other React UI libraries, check out the following:
\\nIn this guide, we reviewed a comprehensive list of React UI kits — everything from innovative newcomers to popular stalwarts. We also shared other React UI kits that are not quite popular but still pack a punch.
\\nNow you should have the basic, foundational knowledge you need to select the right UI kit for your next React project.
\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nA React UI library or React component library is a software system that comes with a collection of pre-built and reusable components — tables, charts, modals, navbars, cards, buttons, and maps that are ready to use in React applications. These components are out-of-the-box, and beautifully and uniquely styled.
\\nThese built-in React components and ready-to-use design elements reduce the need to build UI components from scratch. They can play a major role in improving your development experience, and reducing time to production.
\\nThere are countless React UI kits and libraries available today. Your choice of a particular React component library largely depends on your project and design requirements. However, you need to also consider the library’s popularity, pricing, support, community, maintenance, and licensing requirements.
\\nIn this guide, we’ll highlight 16 of the most useful kits and libraries and show how to use them in your next React app. A few of them are popular, and some are more obscure. All of them can help address the unique needs of your next React project.
\\nEditor’s note: This article was last updated by Joseph Mawa in March 2025 to add information on Headless UI for React and Hero UI, as well as provide pros/cons for each library.
\\nBuilding UI components from scratch can be tedious and sometimes futile. This is why component libraries exist; they provide ready-to-use design elements, thereby allowing developers to focus on building the UI without building everything from scratch.
\\nWhile building from scratch gives you complete control, it comes with a cost: maintainability.
\\nUsing UI library makes more sense in most cases and it brings with it the following benefits:
\\nBy providing beautiful components or design elements, UI libraries ensure that developers focus on implementing the functionality of an app, thereby speeding up the development process.
\\nFaster development time doesn’t mean developers should compromise on the look of their application. This is why UI libraries come with beautifully designed, ready-to-use components that act as the building blocks of an application.
\\nBecause the web is accessed by different people with different devices and needs, it is a huge task to build components from scratch that address your users’ accessibility needs and have the correct styles on multiple devices. UI libraries take care of these and also handle the support of older browsers.
\\nIn some cases — usually involving the use of a relatively new CSS property or browser tool — developing CSS that works with all browsers can be tricky. This can negatively affect your user’s experience. UI libraries are an effective solution for this because they have cross-browser compatibility; your application will work on all modern browsers.
\\nIn this section, we will compare the top React component libraries by summarizing their functionalities and highlighting their GitHub stars, weekly npm downloads, and newness. This comparison will help you quickly pick those that meet your project requirements.
\\nYou need to be aware that GitHub stars are similar to social media likes. They in no way reflect the quality of the software. Similarly, the weekly npm downloads are far from accurate because they include downloads from automated build servers and bots.
\\nOn the other hand, you can quickly skim through the functionality of each library, pricing, and licensing requirements so that you can identify those libraries that meet your project requirements:
\\nReact UI library | \\nGitHub stars | \\nLicensing | \\nPricing | \\nFunctions | \\nWeekly npm downloads | \\nNewness | \\n
---|---|---|---|---|---|---|
React Bootstrap | \\n22.5K | \\nMIT | \\nFree | \\njQuery-free, ready-to-use React components styled with Bootstrap | \\n1,070,903 | \\n11 years | \\n
Core UI | \\n787 | \\nMIT and Commercial Licenses | \\nFree and Paid versions | \\njQuery-free, customizable, easy to learn React.js UI components, and React.js Admin Templates. | \\n146,306 | \\n7 years | \\n
PrimeReact | \\n7.4K | \\nMIT | \\nFree | \\nRich set of open source UI components for React | \\n151,116 | \\n8 years | \\n
Grommet | \\n8.4K | \\nApache License 2.0 | \\nFree | \\nAccessibility, modularity, responsiveness, and theming | \\n33,808 | \\n10 years | \\n
Onsen UI | \\n8.8K | \\nApache License 2.0 | \\nFree | \\nNative-feeling progressive web apps (PWAs) and hybrid apps | \\n20,392 | \\n9 years | \\n
MUI | \\n94.8K | \\nMIT and Commercial Licenses | \\nFree and Paid versions | \\nReady-to-use foundational React components styled with Google’s Material Design | \\n4,971,142 | \\n11 years | \\n
Chakra UI | \\n38.5K | \\nMIT | \\nFree | \\nSimple, modular, and accessible UI Components | \\n668,703 | \\n5 years | \\n
Ant Design | \\n93.6K | \\nMIT | \\nFree | \\nA set of high-quality React components for building enterprise-class UI designed for web applications | \\n1,705,267 | \\n10 years | \\n
Semantic UI React | \\n13.3K | \\nMIT | \\nFree | \\njQuery-free, declarative API, beautifully styled React components for enterprise-class UI | \\n274,350 | \\n10 years | \\n
Blueprint UI | \\n20.9K | \\nApache License 2.0 | \\nFree | \\nOptimized for building complex, data-dense interfaces for desktop applications | \\n167,600 | \\n9 years | \\n
Visx | \\n19.7K | \\nMIT | \\nFree | \\nConsists of low-level visualization primitives for React | \\n664,000 | \\n7 years | \\n
Fluent UI | \\n18.8K | \\nMIT | \\nFree | \\nRobust React-based frontend framework/components for building web experiences | \\n115,139 | \\n5 years | \\n
Evergreen | \\n12.4K | \\nMIT | \\nFree | \\nWorks out of the box, offers server-side rendering, flexible, composable, enterprise-grade | \\n15,935 | \\n7 years | \\n
Mantine | \\n27.7K | \\nMIT | \\nFree | \\nFree and open source, usable in any project, customizable, responsive, and adaptive | \\n243,303 | \\n5 years | \\n
Headless UI for React | \\n26.8K | \\nMIT | \\nFree | \\nFree and open source. It provides unstyled and fully accessible React components you can style using Tailwind Css | \\n2,372,740 | \\n4 years | \\n
Hero UI | \\n23.1K | \\nMIT | \\nFree | \\nBuilt on top of Tailwind CSS and React Aria. You can use it to build accessible and aesthetically pleasing React applications | \\n4,223 | \\n4 years | \\n
React Bootstrap rebuilds Bootstrap — the most popular frontend framework for React — removing the unnecessary jQuery dependency.
\\nAlthough the jQuery dependency is removed, React Bootstrap embraces its Bootstrap core and works with the entire Bootstrap stylesheet. Consequently, it is compatible with many Bootstrap themes.
\\nAs one of the oldest React frameworks, React Bootstrap has evolved and matured linearly with React. Additionally, each component is implemented with accessibility in mind, so it offers a set of accessible-by-default design elements.
\\n\\nRun the following code to install React Bootstrap:
\\nsh\\nnpm install react-bootstrap bootstrap\\n\\n
You can easily import and use components like this:
\\njs\\nimport Button from \'react-bootstrap/Button\';\\n\\n// or less ideally\\nimport { Button } from \'react-bootstrap\';\\n\\n<Stack direction=\\"horizontal\\" gap={2}>\\n <Button as=\\"a\\" variant=\\"primary\\">\\n Button as link\\n </Button>\\n <Button as=\\"a\\" variant=\\"success\\">\\n Button as link\\n </Button>\\n</Stack>\\n\\n
Core UI is one of the most powerful React UI component libraries. It provides a robust collection of simple, customizable, easy-to-use React UI components and React Admin Templates. Consequently, Core UI provides all the design elements needed to build modern, beautiful, and responsive React applications, thereby cutting development time significantly.
\\nIn addition to speeding up your development time, Core UI provides beautifully handcrafted design elements that are Bootstrap-compatible. These design elements are true React components built from scratch with Bootstrap but without the jQuery dependency.
\\nFurthermore, Core UI provides both mobile and cross-browser compatibility. It also supports most of the other popular frameworks like Angular and Vue
\\nTo use Core UI, install it by running the following command:
\\nsh\\nnpm install @coreui/react\\n\\n
Then you can import and use any of the built-in React components like so:
\\njs\\nimport React from \'react\'\\nimport { CButton } from \'@coreui/react\'\\n\\nexport const ButtonExample = () => {\\n return (\\n <>\\n <CButton color=\\"primary\\">Primary</CButton>\\n </>\\n )\\n}\\n\\n
PrimeReact, built by PrimeTek Informatics, is one of the most extraordinary React UI kits that accelerates frontend design and development, featuring a collection of more than 70 components to choose from.
\\nIn addition to a wide variety of components, PrimeReact features custom themes, premium application templates, a11y, and responsive and touch-enabled UI components to deliver an excellent UI experience on any device.
\\nFor more details, check out PrimeReact on GitHub.
\\nThe kit is easy to install and use:
\\nsh\\nnpm i primereact --save\\n\\n
For icons, you can download the primeicons
library:
sh\\nnpm i primeicons --save\\n\\n
After installation, you can import and use a component like this:
\\njs\\nimport { Button } from \\"primereact/button\\";\\n\\nfunction PrimeButtonEx() {\\n return (\\n <div>\\n <Button>Button</Button>\\n </div>\\n );\\n}\\n\\n
Part design, part framework, Grommet is a UI library based in React. It features a great set of components that make it easy to get started. The library also provides powerful theming tools that allow you to tailor the component library to align with your desired layout, color, and type.
\\nThe Grommet Design Kit is a drag-and-drop tool that makes designing your layout and components a breeze. It features sticker sheets, app templates, and plenty of icons:
\\nTo set up Grommet, run the following command in your React app:
\\nsh\\nnpm i grommet\\n\\n
To use a component such as Button
, import it from the \\"grommet\\"
package:
js\\nimport { Grommet, Button } from \\"grommet\\"\\n\\nfunction GrommetButtonEx() {\\n return (\\n <Grommet className=\\"App\\">\\n <Button label=\\"Button\\" />\\n </Grommet>\\n );\\n}\\n\\n
If you want your web app to feel native, Onsen UI is the library for you. Onsen UI is designed to enrich the user experience with a mobile-like feel. It’s packed with features that provide the UI experience of native iOS and Android devices.
\\nOnsen UI’s elements and components are natively designed and perfect for developing hybrid apps and web apps. The library enables you to simulate page transitions, animations, ripple effects, and popup models — basically, any effect you would find in native Android and iOS devices:
\\nTo use Onsen in a React app, first install the npm packages:
\\nsh\\nnpm i onsenui react-onsenui --save\\n
onsenui
contains the Onsen UI core instance. react-onsenui
contains the React components:
js\\nimport { Page, Button } from \\"react-onsenui\\";\\n\\nfunction OnsenButtonEx() {\\n return (\\n <Page>\\n <Button> Click Me!!</Button>\\n </Page>\\n );\\n}\\n\\n
Then, import the Onsen CSS:
\\njs\\nimport \\"onsenui/css/onsenui.css\\"\\nimport \\"onsenui/css/onsen-css-components.css\\"\\n
I fondly refer to Onsen UI as the native CSS of the web.
\\nMUI is one of the popular React component libraries. It is based on Google’s Material Design. It is feature-rich with an extensive collection of ready-to-use components.
\\nTo install, run the following command:
\\nsh\\n# with npm\\nnpm install @mui/material @emotion/react @emotion/styled\\n\\n# with yarn\\nyarn add @mui/material @emotion/react @emotion/styled\\n\\n
Next, import the component you want to use from the @mui/material
:
js\\nimport Button from \\"@mui/material/Button\\";\\n\\nfunction MatButtonEx() {\\n return (\\n <div>\\n <Button color=\\"primary\\">Button</Button>\\n </div>\\n );\\n}\\n\\n
MUI also provides beautiful premium themes and templates you can purchase to jumpstart your project. Check out this article for a deeper dive into MUI.
\\nI am so proud of my fellow Nigerian, Segun Adebayo, for developing Chakra UI. It has a clean and neat UI and is one of the most complete React UI kits I have ever seen. Its APIs are simple but composable, and the accessibility is great.
\\nChakra UI has over 30.8K GitHub stars, and is very extensible and customizable.
\\nInside your React project, run the following command to install Chakra UI:
\\nsh\\nnpm i @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4\\n# OR\\nyarn add @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4\\n\\n
Chakra UI has a ChakraProvider
that we must provide at the root of our application when we want to use Chakra components:
js\\nimport * as React from \\"react\\";\\n\\n// 1. import `ChakraProvider` component\\nimport { ChakraProvider } from \\"@chakra-ui/react\\";\\n\\nfunction App({ Component }) {\\n // 2. Use at the root of your app\\n return (\\n <ChakraProvider>\\n <Component />\\n </ChakraProvider>\\n );\\n}\\n\\n
To use a component — for example, Button
— we have to import it from @chakra-ui/react
:
js\\nimport { Button, ButtonGroup } from \\"@chakra-ui/react\\";\\n\\n
Then we can render Button
like so:
js\\nfunction ChakraUIButtonEx() {\\n return (\\n <div>\\n <Button>Click Me</Button>\\n </div>\\n );\\n}\\n\\n
For more information about Chakra UI and its components, visit the official docs.
\\nAnt Design is regarded as one of the best React UI kits in the world. With over 88K stars on GitHub, it tops the list as one of the most used and downloaded React UI kits.
\\nAnt Design incorporates and promotes global design patterns and offers features like powerful theme customization, high-quality React components, and internationalization support.
\\nInstall Ant Design like so:
\\nsh\\n# npm\\nnpm install antd\\n# yarn\\nyarn add antd\\n\\n
We can import the style sheets manually:
\\njs\\nimport \'antd/dist/antd.css\';\\n\\n
We can import any component we want to use from antd
. For example, to use Button
, we would do this:
js\\nimport { Button } from \\"antd\\";\\n\\nfunction AntdEx() {\\n return <Button type=\\"primary\\">Primary Button</Button>;\\n}\\n\\n
Visit this page to see all the components in Ant Design. Ant Design also has a spin-off for Angular and a spin-off for Vue.js.
\\nSemantic UI React is the official Semantic UI integration for React. It is a complete React UI kit that is built on top of the Semantic UI CSS framework.
\\nSemantic UI React boasts over 100 components and offers the following robust features:
\\nShorthand props: instead of <Button type=\\"primary\\" />, we can write <Button primary />. A prop can translate to many values; for example, the icon prop can be an icon name, an <Icon /> instance, or an icon props object
Augmentation: components can be rendered as different HTML elements via as props; a Header may be rendered as an h3 element in the DOM
\\nSemantic UI React is easy to install:
\\nsh\\n# yarn\\nyarn add semantic-ui-react semantic-ui-css\\n\\n# npm\\nnpm install semantic-ui-react semantic-ui-css\\n\\n
After installation, we can then import the minified CSS file:
\\njs\\nimport \\"semantic-ui-css/semantic.min.css\\";\\n\\n
Now, let’s see how we can use an inbuilt Semantic UI component. Let’s use the Button
component:
js\\nimport React from \\"react\\";\\nimport { Button } from \\"semantic-ui-react\\";\\n\\nconst ButtonExampleButton = () => <Button>Click Here</Button>;\\n\\nexport default ButtonExampleButton;\\n\\n
To see all components in Semantic UI React, visit the official docs.
\\nBlueprint UI is a React-based UI kit for the web with over 20K stars on GitHub. It is optimized for building complex interfaces for desktop applications.
\\nInstalling Blueprint UI is very simple:
\\nsh\\nyarn add @blueprintjs/core react react-dom\\n\\n
@blueprintjs/core
is the core of the Blueprint UI kit. It contains over 40 components we can use. The react-dom
and react
packages are required for Blueprint UI to work. Additional components can be obtained from:
@blueprintjs/icons
@blueprintjs/select
@blueprintjs/datetime
@blueprintjs/table
@blueprintjs/timezone
To use a component from Blueprint UI, we’ll have to import it from @blueprintjs/core
. For example, to use the Button
component, we will have to import it from @blueprintjs/core
:
js\\nimport { Button } from \\"@blueprintjs/core\\";\\n\\n
Then we can render the Button
like so:
js\\nfunction BlueprintUIButtonEx() {\\n return (\\n <div>\\n <Button intent=\\"success\\" text=\\"button content\\">\\n Click Me\\n </Button>\\n </div>\\n );\\n}\\n\\n
Visx stands for Visual Components and is a collection of reusable, low-level visualization components developed by Airbnb. It consists of several standalone packages for building flexible visual interfaces with React.
\\n\\nVisx is open source and designed to make creating complex and interactive data visualizations easier using React components. Visx provides a set of modular, low-level building blocks for creating custom visualizations, allowing developers to have fine-grained control over the appearance and behavior of their UI.
\\nYou can install Visx with npm or yarn:
\\nsh\\n# npm\\nnpm install @visx/shape @visx/scale @visx/axis @visx/group @visx/text\\n\\n# yarn\\nyarn add @visx/shape @visx/scale @visx/axis @visx/group @visx/text\\n\\n
Fluent UI, formerly Office UI Fabric, is a set of open source, cross-platform design and user interface (UI) components and libraries developed by Microsoft. It is designed to help developers create consistent, visually appealing, and accessible user interfaces for their web and mobile applications. Fluent UI provides a comprehensive set of UI components that follow the Fluent Design System principles, such as buttons, forms, menus, and more.
\\nTo install Fluent UI, run the following code:
\\nsh\\n# with npm\\nnpm install @fluentui/react\\n\\n# with yarn\\nyarn add @fluentui/react\\n\\n
Evergreen is a design system and set of open source, React-based UI components created by Segment, a customer data platform company. Evergreen UI is designed to help developers build modern and elegant user interfaces for web applications. It provides a collection of reusable, customizable components that follow a minimalist design philosophy.
\\nEvergreen can be installed by running the code below:
\\nsh\\n# yarn\\nyarn add evergreen-ui\\n# npm\\nnpm install --save evergreen-ui\\n\\n
You can import and use components as seen below:
\\njs\\nimport { Button } from \\"evergreen-ui\\";\\n\\nfunction App() {\\n return (\\n <>\\n <Button marginLeft={10} marginRight={10}>\\n Default\\n </Button>\\n <Button marginRight={10} appearance=\\"primary\\">\\n Primary\\n </Button>\\n <Button marginRight={10} appearance=\\"minimal\\">\\n Minimal\\n </Button>\\n </>\\n );\\n}\\n\\n
Mantine is an open source React component library that provides a wide range of high-quality, customizable, and accessible UI components for building modern web applications. Mantine is designed to simplify building user interfaces in React by offering a comprehensive set of React components and utilities.
\\nInstall Mantine by running any of the code below:
\\nsh\\n# npm\\nnpm install @mantine/core @mantine/hooks\\n# yarn\\nyarn add @mantine/core @mantine/hooks\\n\\n
You can import and use components from Mantine like so:
\\njs\\nimport { Button } from \\"@mantine/core\\";\\n\\nfunction Demo() {\\n return <Button fullWidth>Full width button</Button>;\\n}\\n\\n
Headless UI is a React component library that provides tons of unstyled and fully accessible components. It is developed and maintained by Tailwind Labs, the developers of Tailwind CSS, so you can easily integrate it with Tailwind CSS, one of the leading CSS frameworks.
\\nGetting started with the latest version of Headless UI for React is simple and straightforward. You need to install it from the npm package registry like so:
\\nsh\\nnpm install @headlessui/react@latest\\n\\n
After successfully installing it, you can import and use its built-in components as in the example below. If you’re using it with Tailwind CSS, be sure to set it up as well.
\\njs\\nimport { Button } from \'@headlessui/react\'\\n\\nexport default function Example() {\\n return (\\n <Button className=\\"rounded-md bg-gray-600 py-1.5 px-3\\">\\n Save changes\\n </Button>\\n )\\n}\\n\\n
Hero UI, previously known as Next UI, is a fully featured React component library. It is built on top of Tailwind CSS and React Aria and, under the hood, uses Framer Motion for animation.
\\nHero UI is one of the React component libraries to look out for if you want to build accessible and aesthetically pleasing React applications. It comes with a built-in theme functionality that you can easily customize to meet your design requirements.
\\nGetting started with Hero UI is fairly straightforward. You can install the Hero UI command line tool and use it to bootstrap a React project:
\\nsh\\n# Install the command line tool\\nnpm install -g heroui-cli\\n\\n# Create a project using the command line tool\\nheroui init hero-ui-app\\n\\n
Hero UI’s components are distributed as separate npm packages, so you install each component you want to use individually:
\\nsh\\nnpm install @heroui/button\\n\\n
After installation, you can import and use a component like so:
\\njs\\nimport { Button } from \\"@heroui/react\\";\\n\\nexport default function App() {\\n return <Button color=\\"primary\\">Button</Button>;\\n}\\n\\n
If you’re interested in other React UI libraries, check out the following:
\\nIn this guide, we reviewed a comprehensive list of React UI kits — everything from innovative newcomers to popular stalwarts. We also shared other React UI kits that are not quite popular but still pack a punch.
\\nNow you should have the basic, foundational knowledge you need to select the right UI kit for your next React project.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nChoosing between TypeScript and JavaScript depends on your project’s complexity, team structure, and long-term goals.
\\nJavaScript is flexible, lightweight, and ideal for quick prototyping or small projects. TypeScript, with its static typing and advanced tooling, helps catch errors early and improves maintainability for large-scale applications.
\\nIf you’re working on a fast-moving prototype, JavaScript’s simplicity may be the better fit. But if you’re building an enterprise-level application where stability and collaboration matter, TypeScript is often the smarter choice.
\\nTypeScript vs. JavaScript: which one should you choose? The decision isn’t always straightforward. This article will break down the technical and practical differences between TypeScript and JavaScript, complete with code comparisons, ecosystem analysis, and real-world case studies.
\\nHere’s a quick summary of what we’ll discuss:
\\nCriteria | \\nJavaScript is best for… | \\nTypeScript is best for… | \\n
Project size | \\nSmall projects, quick prototypes, and simple web applications | \\nEnterprise-level applications with long-term maintenance in mind | \\n
Development workflow | \\nYou need rapid iteration without a compilation step | \\nYou need strict typing, better tooling, and enhanced code maintainability | \\n
Team collaboration | \\nSolo projects or small teams where code consistency is less critical | \\nLarger teams where enforcing strict type safety improves collaboration | \\n
In early 2021, a fintech startup I consulted for faced a critical production outage. A seemingly harmless JavaScript function failed silently because a date
string was passed where a number
timestamp was expected. The bug led to incorrect transaction processing and cost the company $18,000 in lost revenue. After migrating to TypeScript, similar errors were caught at compile time, reducing runtime failures by 70% in their next release.
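To illustrate the class of bug (a hypothetical reconstruction, not the client’s actual code), TypeScript turns this silent failure into a compile-time error:
function processTransaction(timestamp: number) {\\n return new Date(timestamp).toISOString();\\n}\\n\\nprocessTransaction(Date.now()); // OK\\n// processTransaction(\\"2021-03-01\\"); // Compilation error: Argument of type \'string\' is not assignable to parameter of type \'number\'.\\n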
Yet, TypeScript isn’t always the answer. Last year, when I built a real-time multiplayer game prototype, Vanilla JavaScript’s rapid iteration let me test ideas without wrestling with type definitions. It allowed me to quickly tweak mechanics and experiment without the overhead of a compile step.
\\nNow that you have some background, let’s explore the particulars of both JavaScript and TypeScript, before comparing their features and capabilities.
\\nJavaScript is a high-level, interpreted programming language that enables interactive web development. Created in 1995 by Brendan Eich, JavaScript quickly became an essential part of web development, allowing developers to build dynamic and responsive web applications.
\\nInitially used for client-side scripting, JavaScript has expanded to server-side development (Node.js), mobile app development (React Native), and even game development.
\\nfunction greet(name) {\\n return \\"Hello, \\" + name;\\n}\\n\\nconsole.log(greet(\\"Alice\\")); // Output: Hello, Alice\\n
This simple function takes a name and returns a greeting message. However, since JavaScript is dynamically typed, passing a non-string value could lead to unexpected behavior:
\\nconsole.log(greet(42)); // Output: Hello, 42\\n
Without type checking, JavaScript does not enforce correct data types, which can lead to potential bugs.
\\nJavaScript is ideal for:
\\nWhile JavaScript is versatile, it can become difficult to manage as projects grow, which is where TypeScript comes in.
\\nTypeScript is an open-source, strongly typed programming language developed by Microsoft. It is a superset of JavaScript, meaning any JavaScript code is valid TypeScript. However, TypeScript introduces static typing, interfaces, and improved tooling to enhance code maintainability and scalability.
\\nUnlike JavaScript, TypeScript code is compiled into JavaScript before execution, ensuring that potential errors are caught during development rather than at runtime.
\\nfunction greet(name: string): string {\\n return `Hello, ${name}`;\\n}\\n\\nconsole.log(greet(\\"Alice\\")); // Output: Hello, Alice\\n\\n// The following line will cause a compilation error:\\n// console.log(greet(42));\\n
Since TypeScript enforces strict typing, passing a number instead of a string would result in a compile-time error, preventing potential runtime issues.
\\n\\nTypeScript is best suited for:
\\nAlthough TypeScript adds a compilation step, its benefits in terms of code quality and maintainability make it a preferred choice for many developers.
\\nFeature | \\nJavaScript (JS) | \\nTypeScript (TS) | \\n
Typing system | \\nDynamically typed; variable types are determined at runtime | \\nStatically typed; types must be explicitly declared | \\n
Compilation | \\nInterpreted at runtime; no compilation step | \\nCompiles to JavaScript before execution | \\n
Error handling | \\nErrors appear at runtime, which can cause unexpected behavior | \\nErrors are caught at compile-time, reducing runtime issues | \\n
Code maintainability | \\nCan become hard to manage in large projects due to lack of type enforcement | \\nEasier to maintain and refactor with type safety and better tooling | \\n
Object-Oriented Programming (OOP) | \\nUses prototype-based inheritance | \\nSupports class-based OOP with interfaces and generics | \\n
Tooling support | \\nBasic IDE support; lacks advanced autocomplete and refactoring tools | \\nProvides better IDE support with IntelliSense, autocompletion, and refactoring tools | \\n
Use case suitability | \\nBest for small projects, quick prototyping, and web applications that don’t require strict type safety | \\nIdeal for large-scale applications, enterprise projects, and collaborative development | \\n
JavaScript is dynamically typed, meaning variable types are determined at runtime. TypeScript, on the other hand, enforces static typing, allowing developers to specify data types explicitly.
\\nlet message = \\"Hello\\";\\nmessage = 42; // No error in JavaScript, but this could cause unexpected issues.\\n
let message: string = \\"Hello\\";\\nmessage = 42; // TypeScript error: Type \'number\' is not assignable to type \'string\'.\\n
In JavaScript, errors related to data types often occur at runtime, making debugging more difficult. TypeScript catches these errors at compile time, preventing them from affecting production. This results in fewer runtime exceptions and better overall stability for applications.
\\nfunction add(a, b) {\\n return a + b;\\n}\\nconsole.log(add(5, \\"10\\")); // Output: \\"510\\" (unexpected behavior)\\n
function add(a: number, b: number): number {\\n return a + b;\\n}\\nconsole.log(add(5, \\"10\\")); // Compilation error: Argument of type \'string\' is not assignable to parameter of type \'number\'.\\n
For large applications, TypeScript’s static typing and better tooling make code easier to maintain and refactor. JavaScript, being dynamically typed, can become harder to manage as the codebase grows.
\\nJavaScript supports prototype-based inheritance, while TypeScript offers a more structured, class-based OOP approach with interfaces and generics.
\\nfunction Person(name) {\\n this.name = name;\\n}\\nPerson.prototype.greet = function() {\\n return `Hello, my name is ${this.name}`;\\n};\\n\\nlet person = new Person(\\"Alice\\");\\nconsole.log(person.greet());\\n
class Person {\\n constructor(private name: string) {}\\n greet(): string {\\n return `Hello, my name is ${this.name}`;\\n }\\n}\\n\\nlet person = new Person(\\"Alice\\");\\nconsole.log(person.greet());\\n
TypeScript’s class-based approach improves readability and maintainability, making it more suitable for enterprise applications.
\\nTransitioning from JavaScript to TypeScript can be done incrementally:
\\n.js
to .ts
and enable TypeScript features graduallytsconfig.json
file to configure the TypeScript compilerany
for unknown types initially and refine them over timeExample tsconfig.json
file:
{\\n \\"compilerOptions\\": {\\n \\"target\\": \\"ES6\\",\\n \\"strict\\": true,\\n \\"outDir\\": \\"./dist\\",\\n \\"rootDir\\": \\"./src\\"\\n }\\n}\\n
Both JavaScript and TypeScript have their strengths and weaknesses. JavaScript’s flexibility makes it great for quick development and small projects, while TypeScript’s static typing ensures better maintainability and scalability for large applications.
\\nFor developers looking to build robust, error-free applications with enhanced tooling, TypeScript is the clear winner. However, if your project requires rapid prototyping or has minimal complexity, JavaScript remains a solid choice.
\\nUltimately, the decision comes down to your project’s requirements, team size, and long-term goals. Whether you choose JavaScript or TypeScript, understanding their differences will help you make an informed decision and improve your development workflow.
\\nIn my own work, I’ve seen how the wrong choice can ripple into real consequences: lost revenue, broken user experiences, or wasted time in production. But I’ve also seen how the right one can unlock endless possibilities: speed, efficiency, and confidence in every line of code. That’s the real takeaway: it’s not just about picking a language, but about making a decision that sets your product up for success.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nCreatable
component\\n styles
prop\\n classNames
prop\\n classNamePrefix
prop\\n unstyled
prop\\n React Select is an open source select control created by Jed Watson with and for React. It was inspired by the shortcomings of the native HTML select control. It offers well-designed and battle-tested components and APIs that help you build powerful yet customizable select components. Some of its features include:
\\nIn this tutorial, we’ll walk through how to install, use, and customize React Select for modern React projects. We’ll also dive into various configuration options to tailor the component to our specific needs.
\\nIf you’re using an older version of React Select, you should upgrade using the upgrade guide. You can also check out our video tutorial on React Select.
\\nEditor’s note: This article was last updated by Nelson Michael in March 2025.
\\nThe native HTML <select>
element has several limitations:
<select>
does support keyboard navigation, its functionality is basic and less customizable.React Select addresses these limitations while providing:
\\nReact Select works with any React framework. To install the react-select
package, run either one of the following commands in your terminal:
npm install react-select\\n# OR\\nyarn add react-select\\n# OR\\npnpm install react-select\\n\\n
Using React Select is as easy as adding the canonical Select
component and passing it some vital props such as options
, onChange
, and defaultValue
:
import Select from \'react-select\';\\nimport { useState } from \'react\';\\n\\ninterface Option {\\n value: string;\\n label: string;\\n}\\n\\nconst options: Array<Option> = [\\n { value: \'blues\', label: \'Blues\' },\\n { value: \'rock\', label: \'Rock\' },\\n { value: \'jazz\', label: \'Jazz\' },\\n { value: \'orchestra\', label: \'Orchestra\' }\\n];\\n\\nexport default function MusicGenreSelect() {\\n const [selectedOption, setSelectedOption] = useState<Option | null>(null);\\n\\n return (\\n <Select<Option>\\n value={selectedOption}\\n onChange={(option) => setSelectedOption(option)}\\n options={options}\\n isClearable\\n isSearchable\\n placeholder=\\"Select a music genre...\\"\\n aria-label=\\"Music genre selector\\"\\n />\\n );\\n}\\n\\n
In the code snippet above, the select options are defined as music genres and passed into the Select
component as props. value
and onChange
are wired to the stateful value selectedOption
and its updater function, setSelectedOption
. The result is a simple Select
component:
Props are essential to how React Select works. They are also essential to customizing it. Apart from the props we passed in our first example, here are some common props you can pass to the Select
component:
placeholder: Defines the text displayed in the text input\\nclassName: Sets a className attribute on the outer or root component\\nclassNamePrefix: If provided, all inner components will be given a prefixed className attribute\\nautoFocus: Focuses the control when it is mounted\\nisMulti: Supports multiple selected options\\nnoOptionsMessage: Text to display when there are no options found\\nmenuIsOpen: Opens the dropdown menu by default\\nisLoading: Useful for async operations. For example, to indicate a loading state during a search\\n<Select\\n {...props}\\n placeholder=\\"Select music genre\\"\\n className=\\"adebiyi\\"\\n classNamePrefix=\\"logrocket\\"\\n autoFocus\\n isMulti\\n noOptionsMessage={({ inputValue }) => `No result found for \\"${inputValue}\\"`}\\n/>\\n\\n
React Select can be configured to allow multiple options to be selected in a single Select
component. This can be achieved by toggling on the isMulti
prop on the Select
component:
import Select, { MultiValue } from \\"react-select\\";\\nimport { useState } from \\"react\\";\\n\\nconst options = [\\n { value: \\"blues\\", label: \\"Blues\\" },\\n { value: \\"rock\\", label: \\"Rock\\" },\\n { value: \\"jazz\\", label: \\"Jazz\\" },\\n { value: \\"orchestra\\", label: \\"Orchestra\\" },\\n ];\\n\\nexport default function App() {\\n // We now have multiple options. Basically, an array of options.\\n const [selectedOptions, setSelectedOptions] = useState<MultiValue<{\\n value: string;\\n label: string;\\n }> | null>(null);\\n\\n return (\\n <div>\\n <Select\\n defaultValue={selectedOptions}\\n onChange={setSelectedOptions}\\n options={options}\\n isMulti\\n />\\n </div>\\n );\\n}\\n\\n
You can also do some styling customization on the multi-select dropdown. Here’s how:
\\nimport Select from \'react-select\';\\nimport { useState } from \'react\';\\n\\ninterface Tag {\\n value: string;\\n label: string;\\n color: string;\\n}\\n\\nconst customStyles = {\\n control: (base: any, state: any) => ({\\n ...base,\\n borderColor: state.isFocused ? \'#2684FF\' : \'#ced4da\',\\n boxShadow: state.isFocused ? \'0 0 0 1px #2684FF\' : \'none\',\\n \'&:hover\': {\\n borderColor: state.isFocused ? \'#2684FF\' : \'#a1a7ae\'\\n }\\n }),\\n multiValue: (base: any, { data }: any) => ({\\n ...base,\\n backgroundColor: data.color,\\n color: \'#fff\'\\n }),\\n multiValueLabel: (base: any) => ({\\n ...base,\\n color: \'inherit\'\\n })\\n};\\n\\nfunction TagSelector() {\\n const [selectedTags, setSelectedTags] = useState<Tag[]>([]);\\n\\n const options: Tag[] = [\\n { value: \'react\', label: \'React\', color: \'#61dafb\' },\\n { value: \'typescript\', label: \'TypeScript\', color: \'#3178c6\' },\\n { value: \'javascript\', label: \'JavaScript\', color: \'#f7df1e\' }\\n ];\\n\\n return (\\n <Select<Tag, true>\\n isMulti\\n options={options}\\n value={selectedTags}\\n onChange={(newValue) => setSelectedTags(newValue as Tag[])}\\n styles={customStyles}\\n placeholder=\\"Select tags...\\"\\n closeMenuOnSelect={false}\\n />\\n );\\n}\\n\\n
React Select’s options
props can be static and preselected, as shown in previous examples. They can also be dynamic and asynchronous; that is, generated on demand from an API or a database query. For this use case, React Select offers the Async
component from react-select/async
:
import AsyncSelect from \'react-select/async\';\\nimport { useState } from \'react\';\\n\\ninterface User {\\n value: string;\\n label: string;\\n}\\n\\nfunction UserSelect() {\\n const [selectedUser, setSelectedUser] = useState<User | null>(null);\\n\\n const loadOptions = async (inputValue: string) => {\\n try {\\n const response = await fetch(\\n `https://api.example.com/users?search=${inputValue}`\\n );\\n const data = await response.json();\\n\\n return data.map((user: any) => ({\\n value: user.id,\\n label: user.name\\n }));\\n } catch (error) {\\n console.error(\'Error loading options:\', error);\\n return [];\\n }\\n };\\n\\n return (\\n <AsyncSelect<User>\\n value={selectedUser}\\n loadOptions={loadOptions}\\n onChange={setSelectedUser}\\n isSearchable\\n placeholder=\\"Search users...\\"\\n loadingMessage={() => \\"Searching...\\"}\\n noOptionsMessage={({ inputValue }) => \\n inputValue ? `No users found for \\"${inputValue}\\"` : \\"Start typing to search...\\"\\n }\\n />\\n );\\n}\\n\\n
The Async
component extends the Select
component with asynchronous features like loading state.
The loadOptions
prop is an async function or a promise that exposes the search text (input value) and a callback that is automatically called once the input value changes.
The Async
component includes provision for helpful props like:
cacheOptions: Caching fetched options\\ndefaultOptions: Set default options before the remote options are loaded\\n\\n
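Both can be toggled on directly as boolean props; here is a minimal sketch, reusing the loadOptions function and setSelectedUser setter from the earlier example:
<AsyncSelect\\n cacheOptions\\n defaultOptions\\n loadOptions={loadOptions}\\n onChange={setSelectedUser}\\n/>\\n\\n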
Another component that may come in handy is the Fixed Options component, which makes it possible to have fixed options.
\\nIn some scenarios, you might want certain selections to remain permanent in a multi-select dropdown — these are “fixed” options that users should not be able to remove.
\\nIn this example, we use React Select’s customization capabilities to style these fixed options distinctively and enforce their permanence through our change handler logic. Here’s an example from the docs:
\\nimport React, { useState } from \'react\';\\nimport Select, { ActionMeta, OnChangeValue, StylesConfig } from \'react-select\';\\nimport { ColourOption, colourOptions } from \'../data\';\\n\\n// Custom styles to visually differentiate fixed options\\nconst customStyles: StylesConfig<ColourOption, true> = {\\n multiValue: (base, state) =>\\n state.data.isFixed ? { ...base, backgroundColor: \'gray\' } : base,\\n multiValueLabel: (base, state) =>\\n state.data.isFixed\\n ? { ...base, fontWeight: \'bold\', color: \'white\', paddingRight: 6 }\\n : base,\\n multiValueRemove: (base, state) =>\\n state.data.isFixed ? { ...base, display: \'none\' } : base,\\n};\\n\\n// Helper function to always position fixed options before non-fixed ones\\nconst orderOptions = (values: readonly ColourOption[]): readonly ColourOption[] => {\\n return values.filter(v => v.isFixed).concat(values.filter(v => !v.isFixed));\\n};\\n\\nexport default function FixedOptionsExample() {\\n // Initialize with a set of fixed and non-fixed options\\n const [selectedOptions, setSelectedOptions] = useState<readonly ColourOption[]>(\\n orderOptions([colourOptions[0], colourOptions[1], colourOptions[3]])\\n );\\n\\n // Custom change handler to prevent removal of fixed options\\n const handleChange = (\\n newValue: OnChangeValue<ColourOption, true>,\\n actionMeta: ActionMeta<ColourOption>\\n ) => {\\n switch (actionMeta.action) {\\n case \'remove-value\':\\n case \'pop-value\':\\n // Prevent removal if the option is fixed\\n if (actionMeta.removedValue.isFixed) {\\n return;\\n }\\n break;\\n case \'clear\':\\n // When clearing the selection, preserve only fixed options\\n newValue = colourOptions.filter(v => v.isFixed);\\n break;\\n }\\n // Reorder options to always show fixed ones first\\n setSelectedOptions(orderOptions(newValue));\\n };\\n\\n return (\\n <Select\\n value={selectedOptions}\\n isMulti\\n styles={customStyles}\\n isClearable={selectedOptions.some(v => !v.isFixed)}\\n name=\\"colors\\"\\n className=\\"basic-multi-select\\"\\n classNamePrefix=\\"select\\"\\n onChange={handleChange}\\n options={colourOptions}\\n />\\n );\\n}\\n\\n
Creatable
component
Typically, there is a dead end when there are no options after a search. However, you can choose to let users create a new option. For this use case, React Select offers the Creatable
component for static options and AsyncCreatable
components for dynamic options.
Using Creatable
is the same as using Select
:
import Creatable from \\"react-select/creatable\\";\\nimport { useState } from \\"react\\";\\n\\nconst musicGenres = [\\n { value: \\"blues\\", label: \\"Blues\\" },\\n { value: \\"rock\\", label: \\"Rock\\" },\\n { value: \\"jazz\\", label: \\"Jazz\\" },\\n { value: \\"orchestra\\", label: \\"Orchestra\\" },\\n];\\n\\nexport default function App() {\\n const [selectedOption, setSelectedOption] = useState(null);\\n\\n return (\\n <>\\n <div style={{ marginBlockEnd: \\"1rem\\", display: \\"flex\\" }}>\\n <span>Selected option:</span>\\n <pre> {JSON.stringify(selectedOption)} </pre>\\n </div>\\n <Creatable options={musicGenres} onChange={setSelectedOption} isMulti />\\n </>\\n );\\n}\\n\\n
And using AsyncCreatable
is the same as using Async
:
import AsyncCreatable from \\"react-select/async-creatable\\";\\nimport { useState } from \\"react\\";\\n\\nconst musicGenres = [\\n { value: \\"blues\\", label: \\"Blues\\" },\\n { value: \\"rock\\", label: \\"Rock\\" },\\n { value: \\"jazz\\", label: \\"Jazz\\" },\\n { value: \\"orchestra\\", label: \\"Orchestra\\" },\\n];\\n\\nfunction filterMusicGenre(inputValue) {\\n return musicGenres.filter((musicGenre) => {\\n const regex = new RegExp(inputValue, \\"gi\\");\\n return musicGenre.label.match(regex);\\n });\\n}\\n\\nexport default function App() {\\n const [selectedOption, setSelectedOption] = useState(null);\\n return (\\n <>\\n <AsyncCreatable\\n loadOptions={(inputValue, callback) =>\\n setTimeout(() => callback(filterMusicGenre(inputValue)), 1000)\\n }\\n onChange={setSelectedOption}\\n isMulti\\n isClearable\\n />\\n </>\\n );\\n}\\n\\n
Integrating React Select with React Hook Form simplifies managing form state and validation. The example below shows how to use the Controller
component from React Hook Form to integrate a React Select component seamlessly into your form:
import React from \'react\';\\nimport { useForm, Controller } from \'react-hook-form\';\\nimport Select from \'react-select\';\\n\\ninterface Option {\\n value: string;\\n label: string;\\n}\\n\\ninterface FormData {\\n category: Option | null;\\n}\\n\\nconst options: Option[] = [\\n { value: \'news\', label: \'News\' },\\n { value: \'sports\', label: \'Sports\' },\\n { value: \'entertainment\', label: \'Entertainment\' }\\n];\\n\\nfunction FormSelect() {\\n const { control, handleSubmit } = useForm<FormData>();\\n\\n const onSubmit = (data: FormData) => {\\n console.log(data.category);\\n };\\n\\n return (\\n <form onSubmit={handleSubmit(onSubmit)}>\\n <Controller\\n name=\\"category\\"\\n control={control}\\n rules={{ required: \'Please select a category\' }}\\n render={({ field, fieldState: { error } }) => (\\n <div>\\n <Select\\n {...field}\\n options={options}\\n isClearable\\n placeholder=\\"Select a category...\\"\\n />\\n {error && <span className=\\"error\\">{error.message}</span>}\\n </div>\\n )}\\n />\\n <button type=\\"submit\\">Submit</button>\\n </form>\\n );\\n}\\n\\n
For large datasets or frequent updates, optimizing React Select’s performance is crucial. This example demonstrates how to use memoization with useMemo
and useCallback
to ensure that expensive operations and custom filtering are executed efficiently:
import React, { useMemo, useCallback } from \'react\';\\nimport Select from \'react-select\';\\n\\nfunction generateLargeOptionsList() {\\n // Example: generate a list of options dynamically\\n return Array.from({ length: 1000 }, (_, i) => ({\\n value: `option-${i}`,\\n label: `Option ${i}`\\n }));\\n}\\n\\nfunction CustomOption(props: any) {\\n // Custom option component logic\\n return <div {...props.innerProps}>{props.data.label}</div>;\\n}\\n\\nfunction CustomMultiValue(props: any) {\\n // Custom multi-value component logic\\n return <div {...props.innerProps}>{props.data.label}</div>;\\n}\\n\\nfunction OptimizedSelect() {\\n const options = useMemo(() => generateLargeOptionsList(), []);\\n\\n const filterOptions = useCallback((inputValue: string) => {\\n return options.filter(option =>\\n option.label.toLowerCase().includes(inputValue.toLowerCase())\\n );\\n }, [options]);\\n\\n const customComponents = useMemo(() => ({\\n Option: CustomOption,\\n MultiValue: CustomMultiValue\\n }), []);\\n\\n return (\\n <Select\\n options={options}\\n filterOption={filterOptions}\\n components={customComponents}\\n isSearchable\\n isClearable\\n />\\n );\\n}\\n\\n
React Select also exposes several events to manage your select components (Select
, Async
, etc.). You’ve seen onChange
and autoFocus
. Some others include:
onBlur\\nonMenuOpen\\nonMenuClose\\nonInputChange\\nonMenuScrollToBottom\\nonMenuScrollToTop\\n\\n
These events are self-descriptive and fairly straightforward to understand. For example, you could use onBlur
to validate the select component. Additionally, if you have a long list of options, you can detect when the menu is scrolled to the bottom or top using onMenuScrollToBottom
and onMenuScrollToTop
.
Each of these events will expose the event to the callback function as in the case of onBlur
in the code snippet below:
<Select\\n {...props}\\n onMenuOpen={() => console.log(\\"Menu is open\\")}\\n onMenuClose={() => console.log(\\"Menu is close\\")}\\n onBlur={(e) => console.log(e)}\\n onMenuScrollToBottom={() =>\\n console.log(\\"Menu was scrolled to the bottom.\\")\\n }\\n/>\\n\\n
The Select
component is composed of other child components, each with base styles that can be extended or overridden distinctly. These are components like control
, placeholder
, options
, noOptionsMessage
, etc:
There are three APIs for styling these components: the styles
prop, the classNames
prop, and the classNamePrefix
prop.
styles
propYou can pass an object of callback functions to the styles
prop. Each callback function represents a child component of Select
, and automatically exposes the corresponding base or default styling and state.
\\nN.B., you don’t have to expressly name the function arguments “defaultStyles” and “state.”
import Select from \\"react-select\\";\\nimport { useState } from \\"react\\";\\n\\nconst options = [\\n { value: \\"blues\\", label: \\"Blues\\" },\\n { value: \\"rock\\", label: \\"Rock\\" },\\n { value: \\"jazz\\", label: \\"Jazz\\" },\\n { value: \\"orchestra\\", label: \\"Orchestra\\" },\\n];\\n\\nconst customStyles = {\\n option: (defaultStyles, state) => ({\\n // You can log the defaultStyles and state for inspection\\n // You don\'t need to spread the defaultStyles\\n ...defaultStyles,\\n color: state.isSelected ? \\"#212529\\" : \\"#fff\\",\\n backgroundColor: state.isSelected ? \\"#a0a0a0\\" : \\"#212529\\",\\n }),\\n\\n control: (defaultStyles) => ({\\n ...defaultStyles,\\n // Notice how these are all CSS properties\\n backgroundColor: \\"#212529\\",\\n padding: \\"10px\\",\\n border: \\"none\\",\\n boxShadow: \\"none\\",\\n }),\\n singleValue: (defaultStyles) => ({ ...defaultStyles, color: \\"#fff\\" }),\\n};\\n\\nexport default function App() {\\n const [selectedOption, setSelectedOption] = useState(null);\\n\\n return (\\n <div>\\n <Select\\n defaultValue={selectedOption}\\n onChange={setSelectedOption}\\n options={options}\\n styles={customStyles}\\n />\\n </div>\\n );\\n}\\n\\n
In the code above, the Select
component has been styled to have a dark appearance using the control
, option
, and singleValue
child components. Here is the result:
classNames
prop
With the classNames prop, you can add class names to each child component like so:
<Select\\n {...props}\\n classNames={{\\n control: (state) =>\\n `border ${state.isFocused ? \\"border-red-800\\" : \\"border-red-400\\"}`,\\n option: () => \\"menu-item\\",\\n }}\\n/>\\n\\n
In the code snippet above, the control
component’s border is styled with respective class names based on the isFocused
state of the Select
component. This is typically how you’d use Tailwind CSS with React Select.
classNamePrefix
prop
While the className
prop is used to apply a class name on the root element of the Select
component, the classNamePrefix
is used to namespace every child component:
<Select\\n defaultValue={selectedOption}\\n onChange={setSelectedOption}\\n options={options}\\n className=\\"for-root-component\\"\\n classNamePrefix=\\"for-child-components\\"\\n/>\\n\\n
The code snippet above, with className
and classNamePrefix
, will generate a DOM structure similar to this:
<div class=\\"for-root-component react-select-container\\">\\n <div class=\\"for-child-components__control\\">\\n <div class=\\"for-child-components__value-container\\">...</div>\\n <div class=\\"for-child-components__indicators\\">...</div>\\n </div>\\n <div class=\\"for-child-components__menu\\">\\n <div class=\\"for-child-components__menu-list\\">\\n <div class=\\"for-child-components__option\\">...</div>\\n </div>\\n </div>\\n</div>\\n\\n
You can then target each distinct class name property for styling, for example, in a .css
file.
unstyled
prop
If you need to completely restyle the Select
component, you can apply the unstyled
prop to strip it clean to only the essentials, like so:
<Select\\n {...props}\\n unstyled\\n/>\\n\\n
Then you can use one of the three styling APIs mentioned above to restyle Select
:
Select props
If you use either one of the styles
or classNames
APIs, you can get access to any custom prop you pass to the Select
component through the state
argument, like so:
<Select\\n {...props}\\n customProps={true} // You can pass a custom prop...\\n styles={{\\n control: (defaultStyles, state) => {\\n // ...then access the props through `selectProps`\\n // You can use it to style the component\\n console.log(state.selectProps[\\"customProps\\"]);\\n return {\\n ...defaultStyles,\\n color: state.isSelected ? \\"#212529\\" : \\"#fff\\",\\n backgroundColor: state.isSelected ? \\"#a0a0a0\\" : \\"#212529\\",\\n };\\n },\\n }}\\n/>\\n\\n
Effectively styling the Select
requires that you know the component(s) you intend to style and choose one of the styling APIs above to achieve your goal. If you strip a component down to bare metal, let React Select’s cx utility and custom replacement components be your styling guide.
React Select is a powerful component that can significantly enhance your application’s user experience. By following this guide and implementing the examples above, you can create accessible, performant, and feature-rich select components that meet modern web application requirements.
\\n\\nFor more advanced use cases and detailed API documentation, visit the official React Select documentation. If you’re evaluating different select libraries for your project, check out our guide to the best React Select component libraries.
\\n As React applications grow in complexity, performance optimization becomes a priority to prevent performance decline. React provides two ways to prevent unnecessary re-renders: PureComponent
for class components, and React.memo
for functional components.
In this tutorial, we’ll learn how to memoize components in React using React.PureComponent
and the React.memo
API. We’ll cover some of the fundamentals of React components before we dive into an example.
You can keep up with the changes and suggestions for the React framework on the React RFCs repository.
\\nLike most modern JavaScript frameworks, React is component-based. A component is usually defined as a function of its state and props.
\\nReact supports two types of components: class components and functional components. A functional component is a plain JavaScript function that returns JSX. A class component is a JavaScript class that extends React.Component
and returns JSX inside a render method.
The following code snippet shows a simple ReactHeader
component defined as both a class component and a functional component:
// CLASS COMPONENT\\nclass ReactHeader extends React.Component {\\n render() {\\n return (\\n <h1>\\n React {this.props.version || 17} Documentation\\n </h1>\\n )\\n }\\n}\\n\\n\\n// FUNCTIONAL COMPONENT\\nfunction ReactHeader(props) {\\n return (\\n <h1>\\n React {props.version || 17} Documentation\\n </h1>\\n )\\n}\\n\\n
Editor’s note: This post was updated by Chizaram Ken in March 2025 to compare and contrast the use of PureComponent
and the more modern React.memo
.
React components tend to re-render frequently during normal application usage. This behavior can occur when props change, state updates, or parent components re-render.
\\nWithout proper optimization, these re-renders can become unnecessary and impact performance — especially in large applications with complex component trees, and when components must handle frequent data updates.
\\nBased on the concept of purity in functional programming paradigms, a function is said to be pure if it meets the following two conditions:
\\nA React component is considered pure if it renders the same output for the same state and props. For this type of class component, React provides the PureComponent
base class. Class components that extend the React.PureComponent
class are treated as pure components.
Pure components have some performance improvements and render optimizations because React implements the shouldComponentUpdate()
method for them with a shallow comparison of props and state.
When a parent component re-renders, PureComponent
performs two key comparisons. It compares the current props with the next props, and compares the current state with the next state.
If neither props nor state has changed (based on shallow comparison), React skips the re-render process entirely. This automatic optimization helps prevent unnecessary renders and improves application performance. In practice, a React pure component looks like the following:
\\nimport React from \'react\';\\n\\nclass PercentageStat extends React.PureComponent {\\n\\n render() {\\n const { label, score = 0, total = Math.max(1, score) } = this.props;\\n\\n return (\\n <div>\\n <h6>{ label }</h6>\\n <span>{ Math.round(score / total * 100) }%</span>\\n </div>\\n )\\n }\\n\\n}\\n\\nexport default PercentageStat;\\n\\n
Functional components are very useful in React, especially when you want to isolate state management from the component. That’s why they are often called stateless components.
\\nHowever, functional components cannot leverage the performance improvements and render optimizations that come with React.PureComponent
because, by definition, they are not classes.
If you want React to treat a functional component as a pure component, you’ll have to convert the functional component to a class component that extends React.PureComponent
.
Check out the simple example below:
\\n// FUNCTIONAL COMPONENT\\nfunction PercentageStat({ label, score = 0, total = Math.max(1, score) }) {\\n return (\\n <div>\\n <h6>{ label }</h6>\\n <span>{ Math.round(score / total * 100) }%</span>\\n </div>\\n )\\n}\\n\\n\\n// CONVERTED TO PURE COMPONENT\\nclass PercentageStat extends React.PureComponent {\\n\\n render() {\\n const { label, score = 0, total = Math.max(1, score) } = this.props;\\n\\n return (\\n <div>\\n <h6>{ label }</h6>\\n <span>{ Math.round(score / total * 100) }%</span>\\n </div>\\n )\\n }\\n\\n}\\n\\n
React stateless function components are JavaScript functions that do not manage any state. They offer a simple way to define components that don’t need their own state or lifecycle methods.
\\nIn essence, stateless function components are JavaScript functions that receive props as input and return React elements. They are used when a component doesn’t need to maintain its own state or lifecycle methods.
\\nTypically, these components have consistent output based on their inputs because they have no state or side effects.
\\n\\nIf you give a stateless function component a set of props, it will always render the same JSX. A simple example is:
\\nconst Title = ({ title }) => {\\n return <h1>{title}</h1>;\\n};\\n\\n
While functional components don’t have direct lifecycle methods, they still go through the same three phases as class components:
\\nuseEffect(() => {}, []): This Hook is similar to componentDidMount in class components. The function inside useEffect runs after the component is first rendered
useEffect(() => {}): If you omit the dependency array ([]), useEffect will run after every render (similar to componentDidUpdate)
useEffect(() => { return () => {} }): The function returned inside useEffect (the cleanup function) is equivalent to componentWillUnmount in class components and is used to clean up resources when the component unmounts or before it re-renders
\\nNote that useEffect is not a direct equivalent to lifecycle methods, but rather a different paradigm for handling side effects in your components.
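Here is a small sketch that maps all three patterns onto one hypothetical Timer component:
import { useEffect, useState } from \'react\';\\n\\nfunction Timer() {\\n const [seconds, setSeconds] = useState(0);\\n\\n // Runs once after the first render, like componentDidMount;\\n // the returned cleanup runs on unmount, like componentWillUnmount\\n useEffect(() => {\\n const id = setInterval(() => setSeconds((s) => s + 1), 1000);\\n return () => clearInterval(id);\\n }, []);\\n\\n // No dependency array: runs after every render, like componentDidUpdate\\n useEffect(() => {\\n document.title = `Elapsed: ${seconds}s`;\\n });\\n\\n return <p>{seconds}s</p>;\\n}\\n\\n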
Function components are a simpler way to write components in React. As mentioned earlier, they are JavaScript functions that accept props and return React elements.
\\nHere’s a basic example:
\\nconst ProductCard = ({ name, price, description, inStock }) => {\\n return (\\n <div>\\n <h2>{name}</h2>\\n <p>{description}</p>\\n <span>${price}</span>\\n <p>{inStock ? \'In Stock\' : \'Out of Stock\'}</p>\\n </div>\\n );\\n};\\n\\n
The term “stateless function components” has become outdated since the introduction of Hooks. Modern function components can handle a lot, including managing state using useState
, handling side effects using useEffect
, accessing context, and maintaining references with useRef
.
{ pure }
HOC from Recompose
In the past, optimizing a functional component so that React could treat it as a pure component didn’t necessarily require converting the component to a class component.
\\nThe Recompose package provided a broad collection of higher-order components (HOCs) that were very useful for dealing with functional components. This package exports a { pure }
HOC that tries to optimize a React component by preventing updates on the component unless a prop has changed, using shallowEqual()
to test for changes.
Using the pure HOC, our functional component can be wrapped as follows:
\\nimport React from \'react\';\\nimport { pure } from \'recompose\';\\n\\nfunction PercentageStat({ label, score = 0, total = Math.max(1, score) }) {\\n return (\\n <div>\\n <h6>{ label }</h6>\\n <span>{ Math.round(score / total * 100) }%</span>\\n </div>\\n )\\n}\\n\\n// Wrap component using the `pure` HOC from recompose\\nexport default pure(PercentageStat);\\n\\n
However, the Recompose library is no longer a recommended approach to optimizing React components because it has been officially deprecated. Its functionality has been largely replaced by React Hooks, which effectively addresses the same issues.
\\nReact now provides us with React.memo
as the official way to optimize a functional component.
React.memo
Functional components in React can now leverage similar performance optimizations as PureComponent
through the use of React.memo
API. While functional components don’t inherently skip re-renders, they can be wrapped with memo
to achieve the same optimization.
With React.memo
, you can create memoized functional components that prevent unnecessary updates. This functionality is particularly useful when dealing with components that receive the same set of props.
Using the React.memo
API, the previous functional component can be wrapped as follows:
import React, { memo } from \'react\';\\n\\nfunction PercentageStat({ label, score = 0, total = Math.max(1, score) }) {\\n return (\\n <div>\\n <h6>{ label }</h6>\\n <span>{ Math.round(score / total * 100) }%</span>\\n </div>\\n )\\n}\\n\\n// Wrap component using `React.memo()`\\nexport default memo(PercentageStat);\\n\\n
It is important to note that, unlike PureComponent
, memo
only compares props. However, in functional components, calling the state setter with the same state already prevents re-renders by default, even without memo
.
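A quick sketch of that default state behavior (a contrived example for illustration):
import { useState } from \'react\';\\n\\nfunction Counter() {\\n const [count, setCount] = useState(0);\\n console.log(\\"rendered\\");\\n\\n // setCount(0) with the current value lets React bail out via an Object.is\\n // comparison, so repeated clicks don\'t keep re-rendering (React may render\\n // one extra time before bailing out)\\n return <button onClick={() => setCount(0)}>Reset to {count}</button>;\\n}\\n\\n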
React.memo
API implementation details
There are a few things worth considering about the implementation of the React.memo
API.
\\nFor one, React.memo
is a higher-order component. It takes a React component as its first argument and returns a special type of React component that allows the renderer to render the component while memoizing the output. Therefore, if the component’s props are shallowly equal, the React.memo
component will bail out of the update.
React.memo
works with all React components. The first argument passed to React.memo
can be any type of React component. However, for class components, you should use React.PureComponent
instead of React.memo
.
React.memo
also works with components rendered from the server using ReactDOMServer
.
The React.memo
API can take a second argument: the arePropsEqual()
function. The default behavior of React.memo
is to shallowly compare the component props. However, with the arePropsEqual()
function, you can customize the bailout condition for component updates. The arePropsEqual()
function is defined with two parameters: prevProps
and nextProps
.
The arePropsEqual()
function returns true
when the props compare as equal, thereby preventing the component from re-rendering. It returns false
when the props are not equal.
The following code snippet uses a custom bailout condition:
\\nimport React, { memo } from \'react\';\\n\\nfunction PercentageStat({ label, score = 0, total = Math.max(1, score) }) {\\n return (\\n <div>\\n <h6>{ label }</h6>\\n <span>{ Math.round(score / total * 100) }%</span>\\n </div>\\n )\\n}\\n\\nfunction arePropsEqual(prevProps, nextProps) {\\n return prevProps.label === nextProps.label; \\n}\\n\\n// Wrap component using `React.memo()` and pass `arePropsEqual`\\nexport default memo(PercentageStat, arePropsEqual);\\n\\n
We use the strict equal operator ===
because we want to check the equality between the values and their types without conversion. For example, \\"1\\"
and 1
are not the same. Loose equality between them will return true, \\"1\\" == 1 // true
. But, strict equality will be false, \\"1\\" === 1 // false
. So, we want to perform strict comparisons.
The arePropsEqual()
function acts very similarly to the shouldComponentUpdate()
lifecycle method in class components. Note that arePropsEqual
works in the opposite way:
shouldComponentUpdate: returns true to trigger a re-render
arePropsEqual: returns true to prevent a re-render
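To make the inversion concrete, here is a sketch expressing one in terms of the other (reusing the arePropsEqual function from above):
import { Component } from \'react\';\\n\\nfunction arePropsEqual(prevProps, nextProps) {\\n return prevProps.label === nextProps.label;\\n}\\n\\nclass PercentageStat extends Component {\\n // Conceptually the inverse of arePropsEqual: re-render only when props are NOT equal\\n shouldComponentUpdate(nextProps) {\\n return !arePropsEqual(this.props, nextProps);\\n }\\n render() {\\n return <h6>{this.props.label}</h6>;\\n }\\n}\\n\\n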
class component to a functionIt is important to emphasize strongly that class components are no longer recommended in new code. Although React still supports class components, the recommended approach is to use functional components.
\\nHere’s how to convert a PureComponent
class to a modern function component using React.memo
.
PureComponent
version:
import { PureComponent } from \'react\';\\n\\nclass Greeting extends PureComponent {\\n render() {\\n console.log(\\"Greeting was rendered at\\", new Date().toLocaleTimeString());\\n return <h3>Hello{this.props.name && \', \'}{this.props.name}!</h3>;\\n }\\n}\\n\\n
Converted function component with memo
:
import { memo } from \'react\';\\n\\nconst Greeting = memo(function Greeting({ name }) {\\n console.log(\\"Greeting was rendered at\\", new Date().toLocaleTimeString());\\n return <h3>Hello{name && \', \'}{name}!</h3>;\\n});\\n\\n
The functional component version achieves the same optimization, but it’s more concise and follows modern React practices.
\\nPureComponent and React.memo
Below is a brief comparison of PureComponent and React.memo:
| Features | PureComponent | React.memo |
|---|---|---|
| State handling | Compares both props and state | Only compares props; state changes are automatically optimized |
| Props access | Through this.props | Directly as function parameters |
| Import statement | import { PureComponent } from 'react' | import { memo } from 'react' |
| Component definition | class MyComponent extends PureComponent | const MyComponent = memo(function MyComponent) |
| Lifecycle methods | Uses class lifecycle methods | Uses Hooks for lifecycle functionality |
| Syntax | More verbose; requires class syntax | More concise; uses function syntax |
| Performance optimization | Automatic shallow comparison | Customizable comparison through second argument |
| State declaration | this.state = { ... } | Uses the useState Hook |
| Modern React alignment | Deprecated approach | Recommended modern approach |
As developers, knowing when to use a tool is important: using it in the wrong place either undermines your optimization or makes your code unnecessarily verbose. As a rule of thumb, memoize components that render often with the same props or whose renders are expensive, and don’t bother memoizing cheap components or components whose props change on almost every render.
\\nWith React.memo
API, you can now enjoy the performance benefits that come from using functional components together with the optimizations that come with memoizing the components.
In this article, we covered the React.memo
API in detail. First, we covered the differences between functional and class components in React, and then we reviewed pure components, learned how to convert a functional component to a class component, and covered how to convert a class component to a functional component.
I hope you enjoyed this article. Be sure to leave a comment if you have any questions. Happy coding!
A Universally Unique Identifier, or UUID, is a 128-bit value generated by a computer system that has extremely low chances of repeating. According to the Internet Engineering Task Force (IETF) in RFC 4122, the first official specification for implementing UUIDs in computer systems, the UUID protocol is defined as “A 128-bits-long identifier that can guarantee uniqueness across space and time.”
\\nToday, UUIDs are popular among developers because they are a reliable and convenient way to label data distinctly. There are also different versions of UUID that the IETF has released specs for. At the time of writing, the latest UUID version is v8.
\\nAccording to UUID specs, the 128 bits that make up a UUID have different segments that serve different purposes. However, the segmentation and their purposes depend on the UUID version. For example, v1, v2, and v6 have a 48-bit segment representing the MAC address of the computer system that generated the UUID. They also have a 60-bit segment for a timestamp and a 13 or 14-bit segment for a “uniquifying” clock sequence. In another example, v4 UUIDs contain a 122-bit segment of randomly generated values.
\\nMore commonly, UUIDs are presented in a format that looks like this: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
. Here, x
is a hexadecimal value that represents four bits. A UUID is usually a string with an 8-4-4-4-12 format (32 hex values and four hyphens). Here is an example:
b592d358-adf0-4782-b90e-d8ee4118ddcd\\n\\n
Most operating systems have a CLI tool for generating UUIDs:
\\nuuidgen // generates a UUID\\nuuidgen --help // view help for the uuidgen command (flag support varies by OS)\\n\\n
The uuidgen
command is available on Windows, Linux, and macOS systems to generate UUIDs (mostly v4) on the command line or terminal.
The 128-bit UUID value is typically represented as a 36-character string when including hyphens, or 32 characters without hyphens. The standard format consists of 32 hexadecimal characters grouped into five sections, separated by hyphens. Here’s an example:
\\n550e8400-e29b-41d4-a716-446655440000
(36 characters)
550e8400e29b41d4a716446655440000 (32 characters)
\\nThe length of a UUID is directly tied to its uniqueness. A 128-bit UUID provides 2^128 (approximately 3.4 x 10^38) possible unique values, making the probability of a collision (two UUIDs being the same) extremely low. For example, if you generate 1 billion UUIDs per second, it would take 86 years to reach a 50% chance of a collision.
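That 86-year figure falls out of the birthday bound. A back-of-the-envelope sketch of the arithmetic, using the 122 random bits in a v4 UUID:
const N = 2 ** 122; // possible v4 UUIDs\\nconst k = Math.sqrt(2 * Math.log(2) * N); // ~50% collision threshold\\nconsole.log(k.toExponential(2)); // ≈ 2.71e+18 UUIDs\\nconsole.log((k / 1e9 / 31_536_000).toFixed(0)); // ≈ 86 years at 1 billion UUIDs/second\\n\\n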
\\nUUID collisions are detrimental where identifiers have to be unique — for example, where the UUIDs are the primary keys in a database table.
\\nAs mentioned above, the standard length of generated UUIDs is 128 bits. However, you can shorten a UUID for several purposes. Shortening UUIDs comes with tradeoffs, which this article will also discuss.
\\nEditor’s note: This article was last updated by Ikeh Akinyemi in March 2025.
\\nGenerating UUIDs in Node.js is a common task, especially in distributed systems, databases, and applications requiring high levels of uniqueness. However, there are multiple ways to generate UUIDs in Node.js, each with its own strengths and weaknesses. In this section, we’ll compare the following popular methods for generating UUIDs:
\\nuuid
npm packagecrypto.randomUUID()
methodMath.random()
for custom UUID generationBy the end of this section, you’ll have a clear understanding of which method is best suited for your specific use case.
\\nuuid
npm packageThe uuid
package is one of the most popular libraries for generating UUIDs in Node.js. It supports multiple UUID versions (v1, v3, v4, v5, and v6) and provides a simple API for generating cryptographically secure UUIDs:
npm install uuid\\n
import { v4 as uuidv4 } from \'uuid\';\\nconst uuid = uuidv4();\\nconsole.log(uuid); // Example output: \'1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed\'\\n\\n
The uuid
package is ideal for projects that require support for multiple UUID versions (v1, v3, v4, v5, and v6) or need advanced features like UUID validation and custom formatting. If your application demands flexibility and cross-platform compatibility, this package is a strong choice.
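For example, here is a small sketch of those extra features: minting a deterministic, name-based v5 UUID and then validating it (the input URL is just an arbitrary example):
\\nimport { v5 as uuidv5, validate, version } from \'uuid\';\\n\\n// v5 is deterministic: the same name + namespace always yields the same UUID\\nconst id = uuidv5(\'https://example.com/users/42\', uuidv5.URL);\\nconsole.log(id);           // identical output on every run\\nconsole.log(validate(id)); // true\\nconsole.log(version(id));  // 5\\n\\n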
crypto.randomUUID()
methodNode.js introduced the crypto.randomUUID()
method in version 14.17.0, providing a built-in way to generate UUIDs without external dependencies. The crypto.randomUUID()
method generates a UUID v4, which is a randomly generated 128-bit identifier.
Unlike other UUID versions (e.g., v1, which includes a timestamp and MAC address), UUID v4 is entirely random, making it ideal for most use cases where uniqueness and security are paramount:
\\nimport crypto from \'node:crypto\';\\nconst uuid = crypto.randomUUID();\\nconsole.log(uuid); // Example output: \'550e8400-e29b-41d4-a716-446655440000\'\\n\\n
The built-in crypto.randomUUID()
method is perfect for developers who want a simple, secure, and dependency-free way to generate UUID v4 identifiers. If your application only requires UUID v4 and doesn’t need advanced customization, this method is an excellent choice.
Math.random()
for custom UUID generationWhile not recommended for most use cases, you can generate UUIDs using Math.random()
for simple, non-cryptographic purposes. This method is generally less secure and should be used with caution:
function generateCustomUUID() {\\n return \'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx\'.replace(/[xy]/g, function(c) {\\n const r = Math.random() * 16 | 0;\\n const v = c === \'x\' ? r : (r & 0x3 | 0x8);\\n return v.toString(16);\\n });\\n}\\n\\nconst uuid = generateCustomUUID();\\nconsole.log(uuid); // Example output: \'e4e3f8d0-4b2d-4b5d-9b5d-ab8dfbbd4bed\'\\n\\n
Using Math.random()
for UUID generation is best suited for non-critical, internal use cases where security is not a priority. If you need a quick and lightweight solution for generating unique IDs without external dependencies, this approach can work, but it should be used with caution due to its lack of cryptographic security.
In addition to the traditional UUID generation methods, there are several alternative libraries that offer unique features for generating unique identifiers. These libraries are particularly useful when you need shorter IDs, enhanced security, or URL-friendly formats.
\\nshort-uuid
packageThe short-uuid
package allows you to generate and translate RFC 4122 v4-compliant UUIDs into shorter formats. This is particularly useful for applications where shorter IDs are preferred, such as in URLs or database storage:
npm install short-uuid\\n
import short from \'short-uuid\';\\n\\nconst translator = short();\\nconst shortId = translator.generate();\\nconsole.log(shortId); // Example output: \'2gYx5fZb\'\\n\\nconst originalUUID = translator.toUUID(shortId);\\nconsole.log(originalUUID); // Converts back to the original UUID\\n\\n
The short-uuid
package is ideal for applications where compact IDs are beneficial, striking a good balance between readability and efficiency.
cuid2
packageThe cuid2
package is designed to generate secure, collision-resistant IDs that are URL-friendly and do not leak sensitive information. It is an enhanced version of the original cuid
package, with improved security features:
npm install @paralleldrive/cuid2\\n
import { createId } from \'@paralleldrive/cuid2\';\\n\\nconst id = createId();\\nconsole.log(id); // Example output: \'hgyoie0b9qb1fyso09hjyoef\'\\n\\n
The cuid2
package is a strong choice for applications where security and collision resistance are critical. If you need outputs that are highly unpredictable, cuid2
provides a secure and reliable solution.
nanoid
packageThe nanoid
package is a lightweight library for generating random, URL-friendly IDs. It is particularly popular for generating short IDs for use in URLs, verification codes, and database primary keys:
npm install nanoid\\n
import { nanoid } from \'nanoid\';\\n\\nconst id = nanoid();\\nconsole.log(id); // Example output: \'e4D3gvwajLcsYhdgOXK6B\'\\n\\n
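nanoid also lets you tune the length and the character set when the defaults don’t fit, via its customAlphabet helper:
\\nimport { nanoid, customAlphabet } from \'nanoid\';\\n\\nconsole.log(nanoid(10)); // 10-character ID instead of the default 21\\n\\n// Generate 6-digit numeric codes, e.g. for verification flows\\nconst numericId = customAlphabet(\'0123456789\', 6);\\nconsole.log(numericId()); // Example output: \'493027\'\\n\\n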
For advanced use cases, you can create custom UUIDs using hash functions like SHA-256. This approach allows you to generate UUIDs based on specific input data, ensuring uniqueness while maintaining control over the generation process:
\\nimport crypto from \'node:crypto\';\\n\\nfunction generateHashBasedUUID(input) {\\n const hash = crypto.createHash(\'sha256\').update(input).digest(\'hex\');\\n return `${hash.slice(0, 8)}-${hash.slice(8, 12)}-${hash.slice(12, 16)}-${hash.slice(16, 20)}-${hash.slice(20, 32)}`;\\n}\\n\\nconst uuid = generateHashBasedUUID(\'custom-input-data\');\\nconsole.log(uuid); // Example output: \'5a105e8b-9d5c-4b2d-9b5d-ab8dfbbd4bed\'\\n\\n
Custom hash-based UUIDs are ideal for advanced use cases where you need deterministic UUIDs based on specific input data. Note, however, that naively slicing a hash like this does not set the version and variant bits, so the result is UUID-shaped but not spec-compliant; if you need RFC-compliant name-based IDs, use UUID v5 instead.
\\nMethod | \\nUUID versions supported | \\nCryptographically secure | \\nExternal dependency | \\nCustomization options | \\nUse case highlights | \\n
---|---|---|---|---|---|
uuid npm package | \\nv1, v3, v4, v5, v6 | \\n✅ | \\n✅ | \\nHigh | \\nMultiple UUID versions, validation | \\n
crypto.randomUUID() | \\nv4 | \\n✅ | \\n❌ | \\nLow | \\nBuilt-in, no dependencies | \\n
Math.random() | \\nCustom | \\n❌ | \\n❌ | \\nMedium | \\nSimple, non-secure use cases | \\n
short-uuid | \\nv4 | \\n✅ | \\n✅ | \\nMedium | \\nShorter IDs, URL-friendly | \\n
cuid2 | \\nCustom | \\n✅ | \\n✅ | \\nHigh | \\nSecure, collision-resistant, URL-friendly | \\n
nanoid | \\nCustom | \\n✅ | \\n✅ | \\nHigh | \\nShort, URL-friendly, lightweight | \\n
Custom hash-based UUIDs | \\nCustom | \\nDepends on implementation | \\n❌ | \\nHigh | \\nDeterministic, input-based UUIDs | \\n
UUIDs are popular for identifying data stored in a database table and are often effective for this purpose. However, due to the way some database engines work or the anatomy of a UUID, they are not the best for every application or scenario. This section will cover some important drawbacks of using UUIDs in these contexts.
\\n\\nSome database engines like PostgreSQL, MySQL, and SQLite use the B+ tree data structure for indexing. If a developer implements UUIDs as a primary key, the database will likely spend more time re-balancing the B+ tree index whenever an insertion occurs. This is because generated UUIDs are not sequential (compared to incremental integers, for example). B+ tree indexes are better optimized for sequential IDs.
\\nUsing UUIDs can hurt performance and lead to slower writes in very large-scale databases. To combat this effect, alternative sequential IDs (e.g., incremental integers or Snowflake IDs) can be used instead.
\\nIn its essence, a UUID is just an encoded collection of data — like a timestamp, a random number, sometimes a MAC address, etc. — that, depending on the version, can be swiftly decoded to reveal its contents to malicious actors. By decoding a UUID, one can figure out when a database record was created, guess another valid ID, or reveal the MAC address of the computer that created the record.
\\nThere are two options for web applications that need to guard this kind of data. They either must make sure to never expose a UUID to the client (e.g., frontend, mobile app), or they use more secure ID generators that do not leak data (e.g., using the cuid2
or nanoid
package).
While UUIDs are a common choice for generating unique identifiers in Node.js applications, they aren’t always the best option for every scenario. Though UUIDs offer globally unique IDs, their potential for performance drawbacks, especially for applications that require enhanced security, can be a concern. Consider using UUID alternatives like cuid2
and nanoid
, which provide more secure and unpredictable IDs that are less likely to leak sensitive information.
Cursors can either limit or greatly enhance the way your users experience your site.
\\nIn this tutorial, we’ll discuss built-in CSS cursors, and look at how to create custom cursors using CSS (and a bit of JavaScript) to make your website feel more fun and memorable.
\\nWe’ll also tackle the benefits and challenges of using CSS vs. JavaScript for custom cursors, the right scenarios to go beyond default options, and accessibility factors. Basic knowledge of HTML, CSS, and JavaScript will be helpful for following along.
\\ncursor
property?The cursor
property in CSS defines the type of mouse pointer displayed over an element. These predefined cursors are super handy for showing users what they can do in different parts of your site, such as clicking a link, selecting text, dragging an item, or resizing a window.
You can either use a predefined cursor type or even load a custom icon for a unique touch. In most cases, the built-in options are more than enough to cover common interactions, but custom cursors are a cool way to add your own touch to the site.
\\nThe basic syntax is as follows:
\\nselector {\\n cursor: value;\\n}\\n\\n
The default value is auto
, meaning the browser sets a cursor based on the context.
Editor’s note: This article was last updated by Saleh Mubashar in March 2025 to provide more comprehensive coverage of cursor references, include a full reference guide for all cursor
values, and provide more targeted advice on building custom cursors.
Before we get into custom cursors, let’s have a look at all the available cursor options in CSS and their common uses:
\\nCursor value | \\nDescription | \\n
---|---|
alias | \\nAn alias or shortcut can be created | \\n
all-scroll | \\nScroll in any direction | \\n
auto | \\nDefault value – the browser picks a cursor | \\n
cell | \\nSelect a table cell | \\n
col-resize | \\nResize columns | \\n
context-menu | \\nOpens a menu | \\n
copy | \\nCopy an item | \\n
crosshair | \\nCross cursor indicating precise selection | \\n
default | \\nStandard cursor | \\n
e-resize / w-resize | \\nResize to the right / left | \\n
grab | \\nDrag an item | \\n
grabbing | \\nItem is being dragged | \\n
help | \\nHelp info is available | \\n
move | \\nAn item can be moved | \\n
n-resize / s-resize | \\nResize upwards/downwards | \\n
ne-resize / nesw-resize / sw-resize | \\nResize top right diagonally | \\n
no-drop | \\nCan’t drop an item | \\n
none | \\nHidden cursor | \\n
not-allowed | \\nAction not allowed | \\n
nw-resize / nwse-resize / se-resize | \\nResize top left diagonally | \\n
pointer | \\nClickable item | \\n
progress | \\nLoading but interactive | \\n
row-resize | \\nResize rows | \\n
text | \\nSelect text | \\n
vertical-text | \\nSelect vertical text | \\n
wait | \\nLoading, not interactive | \\n
zoom-in / zoom-out | \\nZoom in / zoom out | \\n
Hover over the boxes below to see the cursors in action:
\\nSee the Pen
\\nUntitled by Samson Omojola (@Caesar222)
\\non CodePen.
Check out the complete list of CSS cursors here.
\\nWhile these cursors are useful and have some basic styling, we can certainly get more creative with custom cursors.
\\nCreating a custom cursor with CSS is a pretty straightforward process. The first step is to find the image you want to use to replace the default cursor. You can either design one yourself or get a free PNG that suits your needs from an icon library such as FontAwesome.
\\nNext, to create the custom cursor, use the cursor
property with the url()
function. We will pass the image location to the cursor using the url
function:
body {\\n cursor: url(\'path-to-image.png\'), auto;\\n}\\n\\n
To ensure that this cursor is used on all parts of your website, the best place to use the cursor
property is in the body
tag of your HTML. However, if you want, you can assign custom cursors to specific elements instead of the whole website.
You can also add a fallback value to your cursor property. When using a custom cursor, the fallback ensures that if the image serving as your cursor is missing or cannot be loaded, your users will still have a usable option.
\\nIn this case, auto is the fallback value for your custom cursor property. Your users will see the regular cursor if the custom one is unavailable.
You can also provide more than one custom cursor (multiple fallbacks) for your website. All you have to do is add their paths to the cursor
property:
body {\\n cursor: url(\'path-to-image.png\'), url(\'path-to-image-2.svg\'), url(\'path-to-image-3.jpeg\'), auto;\\n}\\n\\n
There are three fallback cursors in the code above.
\\n\\nBecause they draw attention to elements you want to highlight on your website, custom cursors are best used in specific scenarios, such as:
\\nA few tips to keep in mind while creating custom cursors include:
\\n.png
or .svg
images for transparencySay you have a table and you’d like the mouse cursor to change to a pointer (i.e., the hand icon) whenever a user hovers over a row in the table. You can use the CSS cursor
property to achieve this.
Here’s an example:
\\n<style>\\n /* Style the table */\\n table {\\n font-family: arial, sans-serif;\\n border-collapse: collapse;\\n width: 100%;\\n }\\n\\n /* Style the table cells */\\n td, th {\\n border: 1px solid #dddddd;\\n text-align: left;\\n padding: 8px;\\n }\\n\\n /* Style the table rows */\\n tr:hover {\\n cursor: pointer;\\n }\\n</style>\\n\\n<table>\\n <tr>\\n <th>Name</th>\\n <th>Age</th>\\n <th>City</th>\\n </tr>\\n <tr>\\n <td>John</td>\\n <td>30</td>\\n <td>New York</td>\\n </tr>\\n <tr>\\n <td>Jane</td>\\n <td>25</td>\\n <td>Chicago</td>\\n </tr>\\n <tr>\\n <td>Bill</td>\\n <td>35</td>\\n <td>Los Angeles</td>\\n </tr>\\n</table>\\n\\n
In the above code, we use the tr:hover
selector to apply the cursor
property to all table rows when the mouse hovers over them. The cursor
property is set to pointer
, which changes the mouse cursor to a hand icon.
To hide the mouse cursor with CSS, you can use the cursor
property and set its value to none
.
Here’s an example:
\\n<style>\\n /* Style the body element */\\n body {\\n cursor: none;\\n }\\n</style>\\n\\n<body>\\n <!-- Your content goes here --\x3e\\n</body>\\n\\n
This will hide the mouse cursor throughout the entire webpage. If you only want to hide the mouse cursor for a specific element, you can apply the cursor
property to that individual element instead of the body
element.
There are several situations in which hiding the mouse cursor might be useful, such as:
\\nRemember that hiding the mouse cursor can be confusing or disorienting for some users, depending on the use case. This strategy should be used carefully and only when necessary.
\\nWhile custom cursors can be created using CSS, JavaScript offers additional advantages. Before we discuss that, let’s look at the advantages and disadvantages of creating custom cursors with CSS and JavaScript.
\\nThere are numerous reasons why it is preferable to create cursors with CSS:
\\nThe primary drawback of using CSS for custom cursors is the limited ability to add animations or advanced customizations.
\\n\\nThis is where JavaScript comes in. JavaScript allows for more advanced interactions when users engage with the cursor—for example, hovering, clicking, or moving over specific elements. By listening to specific events, the cursor’s movements can then be updated and also be easily animated.
\\nCreating a custom cursor with JavaScript involves manipulating DOM elements. We’ll create some DOM elements, which will serve as our custom cursor, and then use JavaScript to manipulate them. Then, as we move our cursor around, those custom elements will move around as our cursor.
\\nInstead of using or downloading an image, we’ll design an animated cursor using CSS to make it more engaging. Move your cursor around the box below to see an example:
\\nSee the Pen
\\nUntitled by Samson Omojola (@Caesar222)
\\non CodePen.
As you can see, the cursor consists of two elements: a large circle and a small circle. We’ll create two div
elements and assign them class names:
<div class=\\"cursor small\\"></div>\\n<div class=\\"cursor big\\"><div>\\n\\n
Next, we’ll style the circles using CSS. The big circle will have a width and height of 50px
and will be shaped into a circle using border-radius: 50%
.
The small circle will be hollow, so we’ll define a border with a border-radius
of 50%
and set its width and height to 6px
each. We also disable the default cursor by setting cursor: none
so that our custom cursor can take its place.
To animate the big circle, we’ll use @keyframes
. The animation lasts 2s
, starting with a background-color
of green and an opacity of 0.2
. At the midpoint, the color changes to orange, and by the end, it turns red. We set animation-iteration-count
to infinite
to make the animation loop continuously:
body {\\n  background-color: #171717;\\n  cursor: none;\\n  height: 120vh;\\n}\\n\\n/* Pin both circles to the viewport origin so the JavaScript\\n   translate3d() offsets track the pointer, and let clicks pass through */\\n.cursor {\\n  position: fixed;\\n  top: 0;\\n  left: 0;\\n  pointer-events: none;\\n}\\n\\n.small {\\n  width: 6px;\\n  height: 6px;\\n  border: 2px solid #fff;\\n  border-radius: 50%;\\n}\\n\\n.big {\\n  width: 50px;\\n  height: 50px;\\n  border-radius: 50%;\\n  animation-name: stretch;\\n  animation-duration: 2s;\\n  animation-timing-function: ease-out;\\n  animation-direction: alternate;\\n  animation-iteration-count: infinite;\\n}\\n\\n@keyframes stretch {\\n  0% {\\n    opacity: 0.2;\\n    background-color: green;\\n    border-radius: 100%;\\n  }\\n  50% {\\n    background-color: orange;\\n  }\\n  100% {\\n    background-color: red;\\n  }\\n}\\n\\n
Now, to make the elements follow the mouse movement, we’ll use JavaScript. The script below listens for mouse movement on the webpage. When the user moves their mouse, the function retrieves the x
and y
coordinates and updates the position of both div
elements accordingly:
const cursorSmall = document.querySelector(\'.small\');\\nconst cursorBig = document.querySelector(\'.big\');\\n\\nconst positionElement = (e) => {\\n const mouseX = e.clientX;\\n const mouseY = e.clientY;\\n\\n cursorSmall.style.transform = `translate3d(${mouseX}px, ${mouseY}px, 0)`;\\n cursorBig.style.transform = `translate3d(${mouseX}px, ${mouseY}px, 0)`;\\n};\\n\\nwindow.addEventListener(\'mousemove\', positionElement);\\n\\n
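As an optional refinement, and a sketch rather than part of the demo above, you can make the big circle trail the pointer by easing toward the last known mouse position inside a requestAnimationFrame loop. This reuses the cursorBig element from the previous snippet:
\\nlet targetX = 0, targetY = 0, bigX = 0, bigY = 0;\\n\\nwindow.addEventListener(\'mousemove\', (e) => {\\n  targetX = e.clientX;\\n  targetY = e.clientY;\\n});\\n\\nfunction animate() {\\n  // Move 15% of the remaining distance each frame (a simple lerp)\\n  bigX += (targetX - bigX) * 0.15;\\n  bigY += (targetY - bigY) * 0.15;\\n  cursorBig.style.transform = `translate3d(${bigX}px, ${bigY}px, 0)`;\\n  requestAnimationFrame(animate);\\n}\\nrequestAnimationFrame(animate);\\n\\n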
See the complete code alongside the interactive cursor in the below CodePen:
\\nSee the Pen
\\nUntitled by Samson Omojola (@Caesar222)
\\non CodePen.
Here’s how it works:
\\nquerySelector
to access the two div
elementspositionElement
function retrieves the current mouse x
and y
coordinatestransform: translate3d()
property for both cursor elements, moving them accordinglytransform
repositions elements in both horizontal and vertical directions, while translate3d
adjusts their position in 3D spaceCustom cursors can make a website feel unique, but they can also be annoying or distracting if overused. Many people find them frustrating, especially if they make navigation harder. A cursor should help users, not get in their way.
\\nBefore adding a custom cursor, ask yourself if it actually improves the experience or if it’s just for looks. Also, keep in mind that not all browsers support fancy cursor effects, especially older ones. Here’s the browser compatibility data for the cursor
property:
To keep things user-friendly, use custom cursors sparingly and make sure they fit the design. If possible, give users the option to turn them off so they can stick with the default system cursor if they want.
\\nCustom cursors might seem like a fun way to personalize a website, but they can cause serious accessibility issues. Many people rely on built-in OS features to modify their cursors, such as increasing size or using high-contrast colors. These changes help users with low vision or motor impairments navigate their devices more easily.
\\nWhen a website overrides these modifications with a custom CSS cursor, it can make the experience frustrating—or even unusable—for some users.
\\nIf you must use a custom cursor, make sure to:
\\nprefers-reduced-motion
to disable custom cursors for users who find them distracting:@media (prefers-reduced-motion: reduce) {\\n *{\\ncursor: auto; /* Reverts to the default cursor */\\n }\\n}\\n
aria-hidden=\\"true\\"
to the cursor elements to prevent them from being picked up by screen readers
At the end of the day, a cursor should enhance usability, not get in the way. If there’s any chance a custom cursor could make a website harder to use, it’s best to avoid it altogether. I would also suggest reading this excellent article by Eric Bailey on the drawbacks of custom cursors. He makes a bunch of really good points.
\\nIn this tutorial, we discussed built-in CSS cursors, creating custom cursors with CSS, using multiple cursors, and adding animations with CSS and JavaScript. We also covered the pros and cons of using CSS vs. JavaScript for custom cursors, when to go beyond the default options, and the accessibility factors to keep in mind.
\\n Turbo Native Modules are a relatively new addition to React Native’s architecture. They are a modernized, optimized approach to creating native modules, improving performance and allowing for easier integration into modern React Native apps. In the previous React Native architecture, they were simply called native modules.
\\nIn this tutorial, we will explore these modules and their role in React Native as we build a custom Turbo Native Module for Android. The custom module will allow our React Native app to access native mobile APIs to retrieve information like device model, IP address, uptime, battery status, battery level, and the Android version.
\\nTo follow along with this tutorial, you should have:
\\nTurbo Native Modules are the current stage of the native module transformation with a few extra benefits added to improve performance in React Native. This transformation replaced the asynchronous bridge with JSI to resolve the performance issues during the communication between JavaScript and platform-native code.
\\nThe Turbo Native Module architecture is implemented in C++, which offers the following benefits:
\\nTo better understand how these modules improve performance in React Native apps, you need to understand the following keywords in the React Native architecture:
\\nThe asynchronous bridge is the primary communication medium between the native platforms (iOS and Android) and JavaScript in the old architecture. Here’s how it worked:
\\nJSI is an interface that allows JavaScript and C++ to share memory references, enabling direct communication between JavaScript and native platforms without serialization costs. It calls native methods (C++, Objective-C, or Java) directly from the JavaScript engine, and allows access to databases and other complex instance-based types.
\\nCodegen is a tool that automates the creation of boilerplate code that connects the JavaScript engine to the Turbo Native Modules. It reduces cross-boundary type errors (one of the most common sources of crashes in cross-platform apps) when creating Native Modules while providing a consistent way to handle communication between JavaScript and native platform code.
\\nFabric rendering architecture is React Native’s new rendering system that works with Native Modules and JSI to enhance rendering performance by reducing unnecessary updates. It supports both asynchronous and synchronous updates.
\\nThe Native Module architecture supports modules written in C++. While Native Modules allow you to write native iOS platform code with Swift or Objective C, and native Android platform code with Java or Kotlin, C++ Turbo Modules let you write your module in C++ and it works across all platforms, including Android, iOS, Windows, and macOS.
\\nIf your app requires more performance optimizations and fine-grained memory management, you should consider using C++ Turbo Native Modules.
\\nThis section will show you how to write a custom Turbo Module to allow our React Native app to access Native Android APIs to get info such as device model, IP address, uptime, battery status, battery level, and the Android version.
\\nTo make this work, we need to use the following Android APIs:
\\nTo set up your React Native project, run the following command:
\\nnpx @react-native-community/cli@latest init SampleApp --version 0.76.0\\n\\n
This will download a template for the project and install the dependencies using npm. To avoid build issues, delete the node_modules folder and run yarn install
to reinstall the dependencies using Yarn.
Run the following command to start your project:
\\nnpm run start\\n\\n
Then press A to run on Android. You may encounter the following error:
Error: SDK location not found. Define location with sdk.dir in the local.properties file or with an ANDROID_HOME environment variable.\\n
If you do, navigate to the Android
directory and create a file named local.properties
. Open the file and paste your Android SDK path like below:
For Mac:
\\nsdk.dir = /Users/USERNAME/Library/Android/sdk\\n\\n
For Windows:
\\nsdk.dir=C:\\\\\\\\Users\\\\\\\\UserName\\\\\\\\AppData\\\\\\\\Local\\\\\\\\Android\\\\\\\\sdk\\n\\n
To implement a Turbo Module, you need to define a typed JavaScript specification using TypeScript. This specification declares the data types and methods used in your native platform’s code.
\\nIn your project’s root directory, create a spec folder with a file named NativeGetDeviceInfo
and add the following:
import type {TurboModule} from \'react-native\';\\nimport {TurboModuleRegistry} from \'react-native\';\\nexport interface Spec extends TurboModule {\\n getDeviceModel(): Promise<string>;\\n getDeviceIpAddress(): Promise<string>;\\n getDeviceUptime(): Promise<string>;\\n getBatteryStatus(): Promise<string>;\\n getBatteryLevel(): Promise<string>;\\n getAndroidVersion(): Promise<string>;\\n}\\nexport default TurboModuleRegistry.getEnforcing<Spec>(\\n \'NativeGetDeviceInfo\',\\n);\\n\\n
Here, we’ve defined a TypeScript interface and module that interacts with native code to fetch various device-related information.
\\nThe getDeviceModel
method fetches the device’s model, such as “Samsung Galaxy S21,” while getDeviceIpAddress
retrieves the current IP address of the device. For tracking system activity, getDeviceUptime
provides the duration since the device was last booted.
Battery-related details can be accessed using getBatteryStatus
, which indicates whether the device is charging or discharging, and getBatteryLevel
, which returns the current battery level as a percentage. Lastly, the getAndroidVersion
method retrieves the Android operating system version, specifically for Android devices.
Next, we’ll configure the Codegen tools to use the typed specifications to generate platform-specific interfaces and boilerplate. To do this, update your package.json
to include the following:
\\"dependencies\\": {\\n ...\\n},\\n\\"codegenConfig\\": {\\n \\"name\\": \\"NativeGetDeviceInfoSpec\\",\\n \\"type\\": \\"modules\\",\\n \\"jsSrcsDir\\": \\"specs\\",\\n \\"android\\": {\\n \\"javaPackageName\\": \\"com.nativegetdeviceinfo\\"\\n }\\n}\\n\\n
Now, run the following command to generate the boilerplate code using the typed specifications:
\\ncd android\\n./gradlew generateCodegenArtifactsFromSchema\\n\\n
You should see the following result if successful:
\\nBUILD SUCCESSFUL in 5s\\n15 actionable tasks: 3 executed, 12 up-to-date\\n\\n
In your project root directory, navigate to the android/app/src/main/java/com
directory and create a folder named nativegetdeviceinfo
. Inside the folder, create a file named NativeGetDeviceInfoModule.kt
and add the following:
package com.nativegetdeviceinfo\\n\\nimport android.content.Context\\nimport android.os.BatteryManager\\nimport android.os.Build\\nimport android.os.SystemClock\\nimport android.net.wifi.WifiManager\\nimport android.net.ConnectivityManager\\nimport android.net.NetworkCapabilities\\nimport android.text.format.Formatter\\nimport com.facebook.react.bridge.Promise\\nimport com.facebook.react.bridge.ReactApplicationContext\\nimport com.nativegetdeviceinfo.NativeGetDeviceInfoSpec\\n\\nclass NativeGetDeviceInfoModule(reactContext: ReactApplicationContext) : NativeGetDeviceInfoSpec(reactContext) {\\n\\n\\n}\\n\\n
Next, implement the generated NativeGetDeviceInfoSpec
interface.
We’ll start with the implementation of the getDeviceModel()
method:
class NativeGetDeviceInfoModule(reactContext: ReactApplicationContext) : NativeGetDeviceInfoSpec(reactContext) {\\n override fun getName() = NAME\\n\\n // Get device model\\n override fun getDeviceModel(promise: Promise) {\\n val manufacturer = Build.MANUFACTURER\\n val model = Build.MODEL\\n promise.resolve(\\"$manufacturer $model\\")\\n }\\n}\\n\\n
The NativeGetDeviceInfoSpec
class defines the structure and interface for the Native Module. The getName
method sets the name by which the module is recognized in JavaScript. The getDeviceModel
method fetches the device’s model information and returns it as a string to the JavaScript layer using a Promise
.
class NativeGetDeviceInfoModule(reactContext: ReactApplicationContext) : NativeGetDeviceInfoSpec(reactContext) {\\n ...\\n // Get device IP address\\n override fun getDeviceIpAddress(promise: Promise) {\\n try {\\n val connectivityManager = getReactApplicationContext().getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager\\n val network = connectivityManager.activeNetwork\\n val networkCapabilities = connectivityManager.getNetworkCapabilities(network)\\n\\n val ipAddress = when {\\n networkCapabilities?.hasTransport(NetworkCapabilities.TRANSPORT_WIFI) == true -> {\\n val wifiManager = getReactApplicationContext().getSystemService(Context.WIFI_SERVICE) as WifiManager\\n val wifiInfo = wifiManager.connectionInfo\\n Formatter.formatIpAddress(wifiInfo.ipAddress)\\n }\\n networkCapabilities?.hasTransport(NetworkCapabilities.TRANSPORT_CELLULAR) == true -> \\"Cellular network IP unavailable\\"\\n else -> \\"Unknown\\"\\n }\\n promise.resolve(ipAddress)\\n } catch (e: Exception) {\\n promise.reject(\\"IP_ERROR\\", \\"Unable to retrieve IP address: ${e.message}\\")\\n }\\n }\\n}\\n\\n
The getDeviceIpAddress
function retrieves the device’s current IP address and communicates it to the JavaScript layer using a Promise
. It uses the ConnectivityManager
to check the active network and its capabilities. If the device is connected to Wi-Fi, it fetches the IP address from the WifiManager
. For cellular connections, it returns a placeholder message since direct retrieval of the cellular IP is not straightforward, and for other cases, it returns Unknown
:
class NativeGetDeviceInfoModule(reactContext: ReactApplicationContext) : NativeGetDeviceInfoSpec(reactContext) {\\n ...\\n // Get device uptime\\n override fun getDeviceUptime(promise: Promise) {\\n val uptimeMillis = SystemClock.uptimeMillis() // Device uptime in milliseconds\\n val uptimeSeconds = uptimeMillis / 1000\\n val hours = uptimeSeconds / 3600\\n val minutes = (uptimeSeconds % 3600) / 60\\n val seconds = uptimeSeconds % 60\\n promise.resolve(\\"$hours hours, $minutes minutes, $seconds seconds\\")\\n }\\n}\\n\\n
The getDeviceUptime
function calculates how long the device has been running since its last boot and sends this information to the JavaScript layer as a human-readable string using a Promise
. It retrieves the uptime in milliseconds using SystemClock.uptimeMillis()
and converts it into seconds, hours, and minutes.
class NativeGetDeviceInfoModule(reactContext: ReactApplicationContext) : NativeGetDeviceInfoSpec(reactContext) {\\n ...\\n // Get battery status\\n override fun getBatteryStatus(promise: Promise) {\\n try {\\n val batteryManager = getReactApplicationContext().getSystemService(Context.BATTERY_SERVICE) as BatteryManager\\n val isCharging = batteryManager.isCharging\\n promise.resolve(if (isCharging) \\"Charging\\" else \\"Not Charging\\")\\n } catch (e: Exception) {\\n promise.reject(\\"BATTERY_STATUS_ERROR\\", \\"Unable to retrieve battery status: ${e.message}\\")\\n }\\n }\\n}\\n\\n
The getBatteryStatus
function checks the current charging status of the device and communicates it to the JavaScript layer using a Promise
. It uses the BatteryManager
system service to determine if the device is charging. If the device is charging, it resolves the Promise
with the string Charging
. Otherwise, it resolves with Not Charging
:
class NativeGetDeviceInfoModule(reactContext: ReactApplicationContext) : NativeGetDeviceInfoSpec(reactContext) {\\n ...\\n // Get battery level\\n override fun getBatteryLevel(promise: Promise) {\\n try {\\n val batteryManager = getReactApplicationContext().getSystemService(Context.BATTERY_SERVICE) as BatteryManager\\n val level = batteryManager.getIntProperty(BatteryManager.BATTERY_PROPERTY_CAPACITY)\\n promise.resolve(\\"$level%\\")\\n } catch (e: Exception) {\\n promise.reject(\\"BATTERY_LEVEL_ERROR\\", \\"Unable to retrieve battery level: ${e.message}\\")\\n }\\n }\\n}\\n\\n
The getBatteryLevel
function retrieves the device’s current battery level as a percentage and sends it to the JavaScript layer using a Promise
. It accesses the BatteryManager
system service and uses the getIntProperty
method with BATTERY_PROPERTY_CAPACITY
to fetch the battery level. If successful, it resolves the Promise
with the battery percentage as a string:
class NativeGetDeviceInfoModule(reactContext: ReactApplicationContext) : NativeGetDeviceInfoSpec(reactContext) {\\n ...\\n // Get Android version\\n override fun getAndroidVersion(promise: Promise) {\\n val androidVersion = Build.VERSION.RELEASE\\n promise.resolve(\\"Android $androidVersion\\")\\n }\\n\\n companion object {\\n const val NAME = \\"NativeGetDeviceInfo\\"\\n }\\n}\\n\\n
The getAndroidVersion
function retrieves the Android operating system version running on the device and sends it to the JavaScript layer using a Promise
. It accesses the version information from Build.VERSION.RELEASE
and resolves the Promise
with the version formatted as a string.
NativeGetDeviceInfoModule
Next, we need to package the NativeGetDeviceInfoModule
and register it in the React Native runtime by wrapping it in a package class that extends TurboReactPackage.
Create a file named NativeGetDeviceInfoPackage.kt
in the nativegetdeviceinfo
folder and add the following:
package com.nativegetdeviceinfo\\n\\nimport com.facebook.react.TurboReactPackage\\nimport com.facebook.react.bridge.NativeModule\\nimport com.facebook.react.bridge.ReactApplicationContext\\nimport com.facebook.react.module.model.ReactModuleInfo\\nimport com.facebook.react.module.model.ReactModuleInfoProvider\\n\\nclass NativeGetDeviceInfoPackage : TurboReactPackage() {\\n\\n override fun getModule(name: String, reactContext: ReactApplicationContext): NativeModule? =\\n if (name == NativeGetDeviceInfoModule.NAME) {\\n NativeGetDeviceInfoModule(reactContext)\\n } else {\\n null\\n }\\n\\n override fun getReactModuleInfoProvider() = ReactModuleInfoProvider {\\n mapOf(\\n NativeGetDeviceInfoModule.NAME to ReactModuleInfo(\\n _name = NativeGetDeviceInfoModule.NAME,\\n _className = NativeGetDeviceInfoModule.NAME,\\n _canOverrideExistingModule = false,\\n _needsEagerInit = false,\\n isCxxModule = false,\\n isTurboModule = true\\n )\\n )\\n }\\n}\\n\\n
The NativeGetDeviceInfoPackage
class defines a custom React Native package for integrating the NativeGetDeviceInfoModule
as a Turbo Native Module. The getModule
method checks if the requested module name matches NativeGetDeviceInfoModule.NAME
and returns an instance of the module if it does, or null
otherwise. The getReactModuleInfoProvider
method supplies metadata about the module by creating a ReactModuleInfo
object. This ensures the module is correctly registered and recognized by the React Native framework.
Next, we need to inform React Native about how to locate this package in our main application.
\\nImport NativeGetDeviceInfoPackage
in the android/app/src/main/java/com/sampleapp/MainApplication.kt
file as follows:
import com.nativegetdeviceinfo.NativeGetDeviceInfoPackage\\n\\n
Then, add the NativeGetDeviceInfoPackage
package to the getPackages
function:
override fun getPackages(): List<ReactPackage> =\\nPackageList(this).packages.apply {\\n // Packages that cannot be autolinked yet can be added manually here, for example:\\n // add(MyReactNativePackage())\\n add(NativeGetDeviceInfoPackage())\\n} \\n\\n
Now, we can invoke the methods in the NativeGetDeviceInfo
specification in our React Native code.
Update App.tsx
with the following:
import React, { useState, useEffect } from \'react\';\\nimport {\\n View,\\n Text,\\n Button,\\n StyleSheet,\\n} from \'react-native\';\\nimport NativeGetDeviceInfo from \'./specs/NativeGetDeviceInfo\';\\n\\nconst App = () => {\\n const [value, setValue] = useState<string | null>(\'\');\\n const getBatteryLevel = async () => {\\n const data = await NativeGetDeviceInfo?.getBatteryLevel();\\n setValue(data ?? \'\');\\n };\\n const getDeviceModel = async () => {\\n const data = await NativeGetDeviceInfo?.getDeviceModel();\\n setValue(data ?? \'\');\\n };\\n const getDeviceIpAddress = async () => {\\n const data = await NativeGetDeviceInfo?.getDeviceIpAddress();\\n setValue(data ?? \'\');\\n };\\n const getDeviceUptime = async () => {\\n const data = await NativeGetDeviceInfo?.getDeviceUptime();\\n setValue(data ?? \'\');\\n };\\n const getAndroidVersion = async () => {\\n const data = await NativeGetDeviceInfo?.getAndroidVersion();\\n setValue(data ?? \'\');\\n };\\n useEffect(() => {\\n getBatteryLevel();\\n }, []);\\n return (\\n <View style={styles.container}>\\n <Text style={styles.title}>{value}</Text>\\n <View style={styles.buttonContainer}>\\n <Button title={\'Check Battery Level\'} onPress={getBatteryLevel} />\\n </View>\\n <View style={styles.buttonContainer}>\\n <Button title={\'Check Device Model\'} onPress={getDeviceModel} />\\n </View>\\n <View style={styles.buttonContainer}>\\n <Button title={\'Check Device IP Address\'} onPress={getDeviceIpAddress} />\\n </View>\\n <View style={styles.buttonContainer}>\\n <Button title={\'Check Device Up time\'} onPress={getDeviceUptime} />\\n </View>\\n <View style={styles.buttonContainer}>\\n <Button title={\'Check Android Version\'} onPress={getAndroidVersion} />\\n </View>\\n </View>\\n );\\n};\\nconst styles = StyleSheet.create({\\n container: { flex: 1, padding: 20, backgroundColor: \'#f5f5f5\' },\\n title: { fontSize: 24, fontWeight: \'bold\', marginBottom: 20 },\\n taskTitle: { fontSize: 18 },\\n buttonContainer: {marginBottom: 20}\\n});\\n\\nexport default App;\\n\\n
This React Native code interacts with the custom Turbo Native Module, NativeGetDeviceInfo
, to retrieve device-specific information. It uses React’s useState
and useEffect
Hooks to manage state and perform initial data fetching.
The app includes functions to fetch and display information such as the battery level, device model, IP address, uptime, and Android version by calling corresponding native methods exposed through the module.
\\nThe final step in our tutorial is to update AndroidManifest.xml
with the following permissions to allow network and Wi-Fi state access, and to enable the getIPAddress
method to function properly:
<uses-permission android:name=\\"android.permission.ACCESS_NETWORK_STATE\\" />\\n <uses-permission android:name=\\"android.permission.ACCESS_WIFI_STATE\\" />\\n\\n
You can now build and run your code on an emulator or Android device:
\\nnpm run start\\n\\nThen press A to run on Android.\\n\\n
You can get the code for the final build here.
\\nIn this tutorial, we explored Turbo Native Modules, C++ Turbo Modules, and their role in React Native. We also built a custom Native Module for Android that allows our React Native app to access native mobile APIs to get info such as device model, IP address, uptime, battery status, battery level, and the Android version.
Long, complex forms can easily overwhelm your users, leading to frustration and potential abandonment. In the apps you build, you may continuously find yourself needing to collect a significant amount of information through forms — whether for onboarding, checkout, or survey processes.
\\nAfter building one of these from scratch, I decided to create a reusable multi-step form component that I can drop into any of my React projects.
\\nIn this guide, I’ll walk through the process of building a reusable multi-step form component in React using React Hook Form and Zod for validation. This component will handle input validation, track form progress, and persist the form data in storage to prevent data loss and provide a smooth user experience.
\\nYou can download the source code from the project’s repository or view the live demo here. Here’s a look at what we’ll be creating:
\\nTo follow along, you should have:
\\nA working knowledge of React and TypeScript, including React.Context
Let’s get to the meat!
\\nHere’s a summary of the packages we’ll be using to create this application:
\\nOpen up your terminal in your preferred directory and run this command to create a new React app with Vite and TypeScript:
\\npnpm create vite@latest multi-step-form\\n# Select React + TypeScript & SWC to follow along\\n\\n
Next, move into the project folder and install the packages mentioned earlier:
\\ncd multi-step-form\\npnpm install && pnpm add react-hook-form react-router-dom zod @mantine/hooks framer-motion lucide-react\\n\\n
This will create our foundation. Next, let’s install Tailwind and initialize shadcn in our project.
\\nAs mentioned above, we’ll be using shadcn, an open source collection of components, to design the form layout. This will allow us to focus more on implementing the form’s logic. If you need help, refer to the official documentation for guidance.
\\nInstall and initialize Tailwind with the following command:
\\npnpm add -D tailwindcss postcss autoprefixer\\n\\n
Then generate the tailwind.config.js
and postcss.config.js
files with the following command:
pnpm tailwindcss init -p\\n\\n
With the configuration files in place, add the Tailwind directives to your main stylesheet (e.g., src/index.css
):
@tailwind base;\\n@tailwind components;\\n@tailwind utilities;\\n\\n/* your custom css here */\\n\\n
Next, update your tailwind.config.js
file to specify the paths to your content files. This ensures Tailwind can purge unused styles in production:
/** @type {import(\'tailwindcss\').Config} */\\nmodule.exports = {\\n content: [\\"./index.html\\", \\"./src/**/*.{ts,tsx,js,jsx}\\"],\\n theme: {\\n extend: {},\\n },\\n plugins: [],\\n};\\n\\n
For better module resolution, configure tsconfig.json
with an alias for the src
directory. This will simplify imports throughout the project:
// tsconfig.json\\n{\\n \\"compilerOptions\\": {\\n \\"baseUrl\\": \\".\\",\\n \\"paths\\": {\\n \\"@/*\\": [\\"./src/*\\"]\\n }\\n }\\n}\\n\\n
Vite also comes with a tsconfig.app.json
in which we’ll do the same thing:
// tsconfig.app.json\\n{\\n \\"compilerOptions\\": {\\n \\"baseUrl\\": \\".\\",\\n \\"paths\\": {\\n \\"@/*\\": [\\"./src/*\\"]\\n }\\n }\\n}\\n\\n
Next, update your Vite configuration to recognize this alias. Open vite.config.ts
and add the following:
// vite.config.ts\\nimport path from \'path\'\\nimport { defineConfig } from \'vite\'\\nimport { fileURLToPath } from \'url\'\\nimport react from \'@vitejs/plugin-react-swc\'\\n\\nconst __dirname = fileURLToPath(new URL(\'.\', import.meta.url))\\n\\nexport default defineConfig({\\n plugins: [react()],\\n resolve: {\\n alias: {\\n \'@\': path.resolve(__dirname, \'./src\'),\\n },\\n },\\n})\\n\\n
With Tailwind configured, it’s time to set up shadcn. Initialize it by running the following:
\\npnpm dlx shadcn@latest init -d\\n\\n
During initialization, shadcn will perform checks, validate your framework, set up Tailwind, and update your project files. Once the process is complete, you’ll see an output like this:
\\n✔ Preflight checks.\\n✔ Verifying framework. Found Vite.\\n✔ Validating Tailwind CSS.\\n✔ Validating import alias.\\n✔ Writing components.json.\\n✔ Checking registry.\\n✔ Updating tailwind.config.ts\\n✔ Updating app\\\\app.css\\n✔ Installing dependencies.\\n✔ Created 1 file:\\n - app\\\\lib\\\\utils.ts\\n\\nSuccess! Project initialization completed.\\nYou may now add components.\\n\\n
Great! Now we have Tailwind and shadcn fully set up in your Vite + React + TypeScript project.
\\nLet’s install a few components we’ll need for this project — input, button, form, toast, and label. Run the following:
\\npnpm dlx shadcn@latest add input button form label toast\\n\\n
N.B., Installing shadcn’s form library installs the React Hook Form package.
\\nFormStep
type, schema, and dataRemember that reusability is our design goal here. We’ll start by defining the FormStep
type, which will hold the properties required in a new step. This includes the title, position, validation schema, and component amongst others — you can expand this how you creatively see fit.
Start by creating the FormStep
type in src/types.ts
. This represents a single step in the form:
// src/types.ts\\nimport { ZodType } from \'zod\';\\nimport { CombinedCheckoutType } from \'./validators/checkout-flow.validator\';\\nimport { LucideIcon } from \'lucide-react\';\\n\\ntype FieldKeys = keyof CombinedCheckoutType;\\n\\nexport type FormStep = {\\n title: string;\\n position: number;\\n validationSchema: ZodType<unknown>;\\n component: React.ReactElement;\\n icon: LucideIcon;\\n fields: FieldKeys[];\\n};\\n\\n
Here’s what each property represents:
\\ntitle
: The title of the stepposition
: The step’s order in the sequencevalidationSchema
: A Zod schema for validating the form fields within the stepcomponent
: A React component to render for the stepicon
: A Lucide icon for visual representationfields
: This is an array of strings in which each element matches a key (i.e., an input field) from the provided schema, making the form strongly typed and less error-proneYou’ll understand it fully when you see the implementation.
\\nSince we’re simulating a checkout process, let’s define validation schemas for each step in src/validators/checkout-flow.validator.ts
:
// src/validators/checkout-flow.validator.ts\\nimport { z } from \'zod\'\\n\\nexport const step1Schema = z.object({\\n  email: z.string().email({ message: \'Please enter a valid email address\' }),\\n  firstName: z.string().min(3, \'First name must be at least 3 characters\'),\\n  lastName: z.string().min(3, \'Last name must be at least 3 characters\'),\\n})\\nexport const step2Schema = z.object({\\n  country: z\\n    .string()\\n    .min(2, \'Country must be at least 2 characters\')\\n    .max(100, \'Country must be less than 100 characters\'),\\n  city: z\\n    .string()\\n    .min(2, \'City must be at least 2 characters\')\\n    /* ... more fields ... */\\n})\\nexport const step3Schema = z.object({\\n  /* ... cardNumber, cardholderName, cvv ... */\\n})\\n\\n
To keep the form type-safe and make the schemas reusable, we merge the individual schemas into a single schema:
\\nexport const CombinedCheckoutSchema = step1Schema\\n  .merge(step2Schema)\\n  .merge(step3Schema)\\n\\nexport type CombinedCheckoutType = z.infer<typeof CombinedCheckoutSchema>\\n\\n
By merging the schemas, we combine the field definitions from all steps into one master schema. This allows us to infer a unified CombinedCheckoutType
that includes all fields in the multi-step form — this combined schema will also come in handy when using React Hook Form.
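Since the individual step schemas still exist on their own, you can also derive per-step types whenever a component only cares about its own fields. For example:
\\ntype Step1Values = z.infer<typeof step1Schema>\\n// { email: string; firstName: string; lastName: string }\\n\\n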
Finally, define the checkoutSteps
array in src/pages/home.tsx
to represent the form steps:
import { FormStep } from \'@/types\'\\nimport Step1 from \'./checkout/step1\'\\nimport Step2 from \'./checkout/step2\'\\nimport Step3 from \'./checkout/step3\'\\nimport {\\n step1Schema,\\n step2Schema,\\n step3Schema,\\n} from \'@/validators/checkout-flow.validator\'\\nimport MultiStepForm from \'@/components/stepped-form/stepped-form\'\\nimport { HomeIcon, UserIcon, CreditCardIcon } from \'lucide-react\'\\n\\nexport const checkoutSteps: FormStep[] = [\\n {\\n title: \'Step 1: Personal Information\',\\n component: <Step1 />,\\n icon: UserIcon,\\n position: 1,\\n validationSchema: step1Schema,\\n fields: [\'email\', \'firstName\', \'lastName\'],\\n },\\n {\\n title: \'Step 2: Address Details\',\\n component: <Step2 />,\\n icon: HomeIcon,\\n position: 2,\\n validationSchema: step2Schema,\\n fields: [\'country\', \'city\', \'shippingAddress\'],\\n },\\n {\\n title: \'Step 3: Payment Details\',\\n component: <Step3 />,\\n icon: CreditCardIcon,\\n position: 3,\\n validationSchema: step3Schema,\\n fields: [\'cardNumber\', \'cardholderName\', \'cvv\'],\\n },\\n]\\n\\nexport default function Home() {\\n return (\\n <div>\\n <MultiStepForm steps={checkoutSteps} />\\n </div>\\n )\\n}\\n\\n
With that done, we can now create the SteppedForm
component to dynamically handle the form rendering, state, logic, and validation with the data in checkoutSteps
.
SteppedForm
componentThe SteppedForm
component is the backbone of our multi-step form design. It contains the form’s logic, tracks the current step, validates inputs, and provides functions for navigation.
When making this, I asked myself a few questions:
\\nValues like currentStep
,isFirstStep
, isLastStep
, and controller functions like nextStep
and previousStep
came to mind, and are pieces we’d need to make the multi-step form work.
React Hook Form uses the React Context, allowing us to share form state across components by having a parent <FormProvider />
component. This allows any child component to access the form state without needing to pass props manually.
We also want to have a custom hook to manage the form state — something like this:
\\nconst { isFirstStep, isLastStep, nextStep } = useMultiStepForm();\\n\\n
The simplest way I found to do this is by leveraging two context values: one from React Hook Form’s API and another from our custom useMultiStepForm
Hook.
This separation keeps the form logic clear while maintaining easy access to both React Hook Form’s form state and our step-based navigation.
\\n\\nReact’s Context API makes it easy to share state and logic while eliminating the need to pass props through multiple layers. The context holds all essential states and methods required by the form steps, navigation buttons, and progress indicator components.
\\nHere’s what we’re currently tracking in the context:
\\nexport interface MultiStepFormContextProps {\\n currentStep: FormStep;\\n currentStepIndex: number;\\n isFirstStep: boolean;\\n isLastStep: boolean;\\n nextStep: () => void;\\n previousStep: () => void;\\n goToStep: (step: number) => void;\\n steps: FormStep[];\\n}\\n\\n
currentStep
: The current form step being renderedcurrentStepIndex
: The index of the current step in the steps
arrayisFirstStep
/ isLastStep
: Booleans to determine if the user is at the start or end of the formnextStep
/ previousStep
: Functions to navigate between stepsgoToStep
: A function to jump to a specific stepsteps
: The full list of FormStep
objectsBy exposing these properties and methods, the context makes the form highly configurable and accessible to any child component.
\\nSteppedForm
componentIn this section, we’ll walk through the process of building the SteppedForm
component. We’ll start by defining the context for managing the form’s state and navigation, then set up the form structure using React Hook Form.
By the end of this section, you’ll have a functional multi-step form component that’s ready to be extended with additional features like navigation buttons, progress indicators, and anything else you choose to implement.
\\nNow, let’s move on to creating the SteppedForm
component:
// components/stepped-form/stepped-form.tsx\\nimport { z } from \'zod\';\\nimport { createContext, useState } from \'react\';\\nimport { FormProvider, useForm } from \'react-hook-form\';\\nimport { FormStep, MultiStepFormContextProps } from \'@/types\';\\nimport { zodResolver } from \'@hookform/resolvers/zod\';\\nimport { CombinedCheckoutSchema } from \'@/validators/checkout-flow.validator\';\\nimport PrevButton from \'@/components/stepped-form/prev-button\';\\nimport ProgressIndicator from \'./progress-indicator\';\\n\\nexport const MultiStepFormContext = createContext<MultiStepFormContextProps | null>(null);\\n\\nconst MultiStepForm = ({ steps }: { steps: FormStep[] }) => {\\n const methods = useForm<z.infer<typeof CombinedCheckoutSchema>>({\\n resolver: zodResolver(CombinedCheckoutSchema),\\n });\\n\\n // Form state\\n const [currentStepIndex, setCurrentStepIndex] = useState(0);\\n const currentStep = steps[currentStepIndex];\\n\\n // Navigation functions\\n const nextStep = () => {\\n if (currentStepIndex < steps.length - 1) {\\n setCurrentStepIndex(currentStepIndex + 1);\\n }\\n };\\n\\n const previousStep = () => {\\n if (currentStepIndex > 0) {\\n setCurrentStepIndex(currentStepIndex - 1);\\n }\\n };\\n\\n const goToStep = (position: number) => {\\n if (position >= 0 && position - 1 < steps.length) {\\n setCurrentStepIndex(position - 1)\\n saveFormState(position - 1)\\n }\\n }\\n\\n /* Form submission function */\\n async function submitSteppedForm(data: z.infer<typeof CombinedCheckoutSchema>) {\\n try {\\n // Perform your form submission logic here\\n console.log(\'data\', data);\\n } catch (error) {\\n console.error(\'Form submission error:\', error);\\n }\\n }\\n\\n // Context value\\n const value: MultiStepFormContextProps = {\\n currentStep: steps[currentStepIndex],\\n currentStepIndex,\\n isFirstStep: currentStepIndex === 0,\\n isLastStep: currentStepIndex === steps.length - 1,\\n goToStep,\\n nextStep,\\n previousStep,\\n steps,\\n };\\n\\n return (\\n <MultiStepFormContext.Provider value={value}>\\n <FormProvider {...methods}>\\n <div className=\\"w-[550px] mx-auto\\">\\n <ProgressIndicator />\\n <form onSubmit={methods.handleSubmit(submitSteppedForm)}>\\n <h1 className=\\"py-5 text-3xl font-bold\\">{currentStep.title}</h1>\\n {currentStep.component}\\n <PrevButton />\\n </form>\\n </div>\\n </FormProvider>\\n </MultiStepFormContext.Provider>\\n );\\n};\\n\\nexport default MultiStepForm;\\n\\n
A lot is going on here, so let’s go over the important details one after the other. (Note that the saveFormState call inside goToStep is the localStorage persistence helper, which we cover later in this guide.)
\\nFormProvider
and multi-step contextAs mentioned earlier, React Hook Form’s FormProvider
is used to provide form methods to all child components. This allows us to manage form state and validation across multiple steps by using the useFormContext
Hook in place of useForm
.
The MultiStepFormContext
provides the necessary state and navigation functions we discussed to all child components, ensuring that buttons and progress indicators can interact with the form’s state.
form
elementThe form
element should wrap up all the steps of your multi-step form. This is crucial because nesting separate form
elements inside individual steps can cause issues.
Any <button>
inside the form with type=\\"submit\\"
(which is the default) will trigger form submission. To prevent premature submissions, only the button in the final step should have this attribute. More on this soon.
The appropriate step is rendered through the currentStep.component value.
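For reference, here's a minimal sketch of what the FormStep type could look like, inferred from how this component uses it (the actual definition lives in @/types and may differ):

// types/index.ts (illustrative sketch, not the repository's actual definition)
import { z } from 'zod';
import { ComponentType, ReactNode } from 'react';

export type FormStep = {
  position: number; // 1-based position, used by goToStep and the progress indicator
  title: string; // heading rendered above the step
  component: ReactNode; // the step's UI, e.g. <Step1 />
  fields: string[]; // field names validated when nextStep runs
  validationSchema?: z.ZodTypeAny; // optional per-step schema
  icon: ComponentType<{ className?: string }>; // icon shown by ProgressIndicator
};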
We also initialize the form using useForm
from React Hook Form and pass it the schema (CombinedCheckoutSchema
) for validation. The zodResolver
ensures the form data is validated against the schema before submission.
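If you're curious what CombinedCheckoutSchema might contain, here's a plausible sketch based on the fields used in the steps below; your actual validator will mirror your own steps:

// validators/checkout-flow.validator.ts (illustrative sketch)
import { z } from 'zod';

export const CombinedCheckoutSchema = z.object({
  email: z.string().email('Enter a valid email'),
  firstName: z.string().min(1, 'First name is required'),
  lastName: z.string().min(1, 'Last name is required'),
  // ...fields from the remaining steps merged into one object
});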
The submitSteppedForm
function handles the form submission. For now, it simply logs the form data to the console, but you can replace this with your actual submission logic (e.g., sending data to an API).
The nextStep
, previousStep
, and goToStep
functions allow users to navigate between steps. These functions are provided to the context, making them accessible to components like PrevButton
, NextButton
, and ProgressIndicator
.
With this base structure, we’re confident that our SteppedForm
component is reusable and well encapsulated, only sharing state with the components that need it. Now, we can define and export a useMultiStepForm Hook for use within child components:
// src/hooks/use-stepped-form.ts\\nimport { MultiStepFormContext } from \'@/components/stepped-form/stepped-form\'\\nimport { useContext } from \'react\'\\n\\nexport const useMultiStepForm = () => {\\n const context = useContext(MultiStepFormContext)\\n if (!context) {\\n throw new Error(\\n \'useMultiStepForm must be used within MultiStepForm.Provider\'\\n )\\n }\\n return context\\n}\\n\\n
The nextStep function

The nextStep
function will handle step transitions. However, we’re going to modify this function further as we want to trigger validation on every step before transitioning to the next one:
const nextStep = async () => {\\n const isValid = await methods.trigger(currentStep.fields);\\n\\n if (!isValid) {\\n return; // Stop progression if validation fails\\n }\\n\\n // grab values in current step and transform array to object\\n const currentStepValues = methods.getValues(currentStep.fields)\\n const formValues = Object.fromEntries(\\n currentStep.fields.map((field, index) => [\\n field,\\n currentStepValues[index] || \'\',\\n ])\\n )\\n\\n // Validate the form state against the current step\'s schema\\n if (currentStep.validationSchema) {\\n const validationResult = currentStep.validationSchema.safeParse(formValues);\\n\\n if (!validationResult.success) {\\n validationResult.error.errors.forEach((err) => {\\n methods.setError(err.path.join(\'.\') as keyof SteppedFlowType, {\\n type: \'manual\',\\n message: err.message,\\n });\\n });\\n return; // Stop progression if schema validation fails\\n }\\n }\\n\\n // Move to the next step if not at the last step\\n if (currentStepIndex < steps.length - 1) {\\n setCurrentStepIndex(currentStepIndex + 1);\\n }\\n};\\n\\n
Here’s a breakdown of its flow:
\\n1. Trigger field validation
\\nThe first step in this function is to validate the input fields related to the current step. This is done using React Hook Form’s methods.trigger
function.
2. Grab current step values and transform into an object
Next, we retrieve the values of the fields in the current step and transform them into an object for further validation. Because methods.getValues(currentStep.fields)
returns the values as an array — [\'[email protected]\', \'John\', \'Doe\']
— we use Object.fromEntries
to transform this array into an object where the keys are the field names and the values are the corresponding input values (e.g., { email: \'[email protected]\', firstName: \'John\', lastName: \'Doe\' }
).
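In isolation, the transformation looks like this (field names and values here are hypothetical):

const fields = ['email', 'firstName', 'lastName'];
const values = ['jane@example.com', 'Jane', 'Doe']; // what getValues(fields) returns
const formValues = Object.fromEntries(fields.map((field, i) => [field, values[i] ?? '']));
// => { email: 'jane@example.com', firstName: 'Jane', lastName: 'Doe' }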
3. Schema validation
Once the values are in the correct format, we validate them against the schema defined at currentStep.validationSchema
. Errors are reported using methods.setError
.
4. Lastly, if all validations pass, we move on to the next step.
\\nNow that we’ve set up SteppedForm
with the correct navigation functions, we can start to use them in custom buttons like a NextButton
and PrevButton
or the progress indicator component. Let’s start with PrevButton
:
// prevbutton.tsx\\nimport { useMultiStepForm } from \'@/hooks/use-stepped-form\'\\nimport { Button } from \'../ui/button\'\\n\\nconst PrevButton = () => {\\n const { isFirstStep, previousStep } = useMultiStepForm()\\n\\n return (\\n <Button\\n variant=\'outline\'\\n type=\'button\'\\n className=\'mt-5\'\\n onClick={previousStep}\\n disabled={isFirstStep}\\n >\\n Previous\\n </Button>\\n )\\n}\\nexport default PrevButton\\n\\n
Now for NextButton
:
// nextbutton.tsx\\nconst NextButton = ({\\n onClick,\\n type,\\n ...rest\\n}: React.ButtonHTMLAttributes<HTMLButtonElement>) => {\\n const { isLastStep } = useMultiStepForm()\\n\\n return (\\n <Button\\n className=\\"text-white bg-black hover:bg-slate-950 transition-colors w-full py-6\\"\\n type={type ?? \'button\'}\\n onClick={onClick}\\n {...rest}\\n >\\n {isLastStep ? \'Submit\' : \'Continue\'}\\n </Button>\\n )\\n}\\n\\n
Remember that our form layout requires only one button with the type="submit" attribute. NextButton plays two roles here: it renders as type='button' with the label Continue for every step up until the last one, where it reads Submit and triggers the form submission.
Each step in our form is a standalone component that follows a consistent pattern:
\\nnextStep
from useMultiStepForm
to move to the next stepLet’s take a look at Step1
:
const Step1 = () => {\\n const {\\n register,\\n getValues,\\n setError,\\n formState: { errors },\\n } = useFormContext<z.infer<typeof SteppedFlowSchema>>()\\n\\n const { nextStep } = useMultiStepForm()\\n\\n const handleStepSubmit = async () => {\\n const { email } = getValues()\\n\\n // Simulate check for existing email in the database\\n if (email === \'[email protected]\') {\\n setError(\'email\', {\\n type: \'manual\',\\n message: \'Email already exists in the database. Please use a different email.\',\\n })\\n return\\n }\\n\\n // move to the next step\\n nextStep()\\n }\\n\\n return (\\n <div className=\\"flex flex-col gap-3\\">\\n <div>\\n <Input {...register(\'email\')} placeholder=\\"Email\\" />\\n <ErrorMessage message={errors.email?.message} />\\n </div>\\n <NextButton onClick={handleStepSubmit} />\\n </div>\\n )\\n}\\n\\n
Here, we decide to make a (mock) query to the database before calling nextStep
. This would be the same pattern up until your last step, in this case, Step3
, where you explicitly assign a submit
type to the navigation button:
const Step3 = () => {\\n /* ... */\\n const handleStepSubmit = async () => {\\n return\\n }\\n\\n return (\\n <div className=\\"flex flex-col gap-3\\">\\n {/* Form fields here */}\\n <NextButton type=\\"submit\\" onClick={handleStepSubmit} />\\n </div>\\n )\\n}\\n\\n
It is generally good practice to give visual feedback to your users on their progress so they don’t feel lost or overwhelmed. We will achieve this with the progress indicator component below — generated by v0!
\\n// progress-indicator.tsx\\nexport default function ProgressIndicator() {\\n const { currentStep, goToStep, currentStepIndex } = useMultiStepForm()\\n\\n return (\\n <div className=\\"flex items-center w-full justify-center p-4 mb-10\\">\\n <div className=\\"w-full space-y-8\\">\\n <div className=\\"relative flex justify-between\\">\\n {/* Progress Line */}\\n <div className=\\"absolute left-0 top-1/2 h-0.5 w-full -translate-y-1/2 bg-gray-200\\">\\n <motion.div\\n className=\\"h-full bg-black\\"\\n initial={{ width: \'0%\' }}\\n animate={{\\n width: `${(currentStepIndex / (checkoutSteps.length - 1)) * 100}%`,\\n }}\\n transition={{ duration: 0.3, ease: \'easeInOut\' }}\\n />\\n </div>\\n {/* Steps */}\\n {checkoutSteps.map((step) => {\\n const isCompleted = currentStepIndex > step.position - 1\\n const isCurrent = currentStepIndex === step.position - 1\\n\\n return (\\n <div key={step.position} className=\\"relative z-10\\">\\n <motion.button\\n onClick={() => goToStep(step.position)}\\n className={`flex size-14 items-center justify-center rounded-full border-2 ${\\n isCompleted || isCurrent\\n ? \'border-primary bg-black text-white\'\\n : \'border-gray-200 bg-white text-gray-400\'\\n }`}\\n animate={{\\n scale: isCurrent ? 1.1 : 1,\\n }}\\n >\\n {isCompleted ? (\\n <Check className=\\"h-6 w-6\\" />\\n ) : (\\n <step.icon className=\\"h-6 w-6\\" />\\n )}\\n </motion.button>\\n </div>\\n )\\n })}\\n </div>\\n </div>\\n </div>\\n )\\n}\\n\\n
The component uses currentStepIndex to calculate the width of the progress line and highlight the current step. Note that it maps over checkoutSteps, the same array we pass to MultiStepForm as steps; you could equally read steps from the useMultiStepForm context.
Persisting form state to localStorage

One of the most frustrating experiences in web forms is losing your progress. It's annoying enough to make a user abandon the process, which often translates to leaving money on the table. Let's address this by persisting the form state to localStorage.
First, what does the structure of the data we’re storing look like?
type SavedFormState = {
  currentStepIndex: number
  formValues: Record<string, unknown>
}
In addition to saving the form values, we also save the current step index to ensure users continue exactly where they left off.
Initializing the localStorage state

We start by initializing the stored form state from localStorage
in MultiStepForm
. To ensure reusability, we'll require our component to accept a localStorageKey
prop. This prevents conflicts when multiple multi-step forms exist in the same application.
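A minimal sketch of the updated component signature (same component as before, just with the new prop):

// components/stepped-form/stepped-form.tsx
const MultiStepForm = ({
  steps,
  localStorageKey,
}: {
  steps: FormStep[];
  localStorageKey: string;
}) => {
  /* ...same body as before... */
};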
Using Mantine’s useLocalStorage
Hook, we create a stateful local storage item that holds the form’s progress:
// stepped-form.tsx
import { useLocalStorage } from '@mantine/hooks';

const [savedFormState, setSavedFormState] = useLocalStorage<SavedFormState | null>({
  key: localStorageKey,
  defaultValue: null,
})
If there’s an existing saved form state, we restore it when MultiStepForm
mounts using React Hook Form’s methods.reset()
:
// stepped-form.tsx\\nuseEffect(() => {\\n if (savedFormState) {\\n setCurrentStepIndex(savedFormState.currentStepIndex)\\n methods.reset(savedFormState.formValues)\\n }\\n}, [methods, savedFormState])\\n\\n
This ensures that if a user refreshes the page or revisits the form, they pick up exactly where they left off.
Next, we define a function to save the form state to localStorage:
// stepped-form.tsx
const saveFormState = (stepIndex?: number) => {
  setSavedFormState({
    currentStepIndex: stepIndex ?? currentStepIndex,
    formValues: methods.getValues(),
  });
};
In React, state updates are asynchronous. When a user navigates to a new step, currentStepIndex
is updated after the navigation occurs. If we save the form state using the old currentStepIndex
, we will store the wrong step index.
For example:

1. The user is on Step 1 (currentStepIndex = 0)
2. The user clicks Next to move to Step 2
3. currentStepIndex is still 0 until the state update completes

To avoid this, we explicitly pass the next step's index when saving.
When the form is successfully submitted, or the user wants to start over, we should clear localStorage:
const clearFormState = () => {\\n methods.reset();\\n setCurrentStepIndex(0);\\n setSavedFormState(null);\\n window.localStorage.removeItem(localStorageKey);\\n};\\n\\n
Pretty straightforward. We also delete the local storage item entirely.
\\nNow we can use these functions in the navigation functions, right before the navigation takes place:
\\n// stepped-form.tsx\\nconst nextStep = async () => {\\n /* ... */\\n if (currentStepIndex < steps.length - 1) {\\n saveFormState(currentStepIndex + 1)\\n setCurrentStepIndex(currentStepIndex + 1)\\n }\\n}\\n\\nconst previousStep = () => {\\n /* ... */\\n if (currentStepIndex > 0) {\\n saveFormState(currentStepIndex - 1)\\n setCurrentStepIndex(currentStepIndex - 1)\\n }\\n}\\n\\nconst goToStep = (position: number) => {\\n if (position >= 0 && position - 1 < steps.length) {\\n saveFormState(position - 1)\\n setCurrentStepIndex(position - 1)\\n }\\n}\\n\\n
This guarantees that whenever the user moves between steps, their progress is saved immediately.
And there you have it! We've built a reusable, type-safe multi-step form component that handles validation and persists form data, preventing data loss and providing a smooth user experience. The component's architecture makes it easy to add new steps or modify existing ones without touching the core logic.
\\nI’ve needed a component like this a few times, so I decided to make a reusable one. Personally, I’d say a multi-step component should exist in a component library like shadcn! 🙂
The complete source code is available in the repository. Contributions are welcome; feel free to adapt it to your needs or use it as inspiration for your own form implementations.
The speed and responsiveness of network communications are crucial parts of modern-day web technology.
\\nWhether browsing a website, accessing a cloud service, or interacting with an API, users expect a seamless experience. A key metric that directly impacts this experience is round-trip time (RTT).
\\nRTT is a fundamental network performance metric for measuring latency between a client and a host. Essentially, it is the time it takes for a data packet to travel from a source to a destination and back again:
\\nRTT measurements can reveal critical insights about network latency, response time, and potential bottlenecks. These measurements can benefit developers and network administrators, as they play a significant role in diagnosing performance issues and ensuring optimal application responsiveness, which is essential in delivering a positive user experience.
\\nIn this article, I’ll guide you through the steps to measure round-trip time using cURL, a transfer tool used to transfer data from or to a server. We’ll also look at different RTT techniques and advanced usage and compare cURL to other available tools.
\\nTools like Ping and Traceroute are commonly used to measure network latency. While these tools offer a simple and straightforward way of measuring RTT metrics, they provide very little and sometimes vague information and are limited to basic data transfers.
\\nIn contrast, cURL supports multiple data transfer protocols, which lets it measure real HTTP transactions and provide timing data that accurately reflects actual user experiences. This makes cURL particularly useful for developers, system administrators, and DevOps engineers who need to:
\\nNow, let’s compare RTT metrics from cURL and Ping:
\\nAs you can see, cURL provides a more detailed and comprehensive output than Ping. While each tool has its use cases, cURL is your best bet if you need an in-depth measurement of your application’s RTT.
\\nAs explained in the previous section, round-trip time measurement in cURL consists of multiple distinct components that contribute to the total time of a request. Let’s break them down:
\\nDNS resolution is often the first step in making an HTTP request. The process involves looking up the domain name in the local cache, querying DNS servers if not cached, and retrieving the target domain’s IP address.
\\nAfter DNS resolution, the TCP three-way handshake occurs:
1. The client sends a SYN packet to the server
2. The server responds with a SYN-ACK
3. The client replies with an ACK
These may look alien to you, but they essentially describe the process by which a client and host initiate a connection.
Simply put, to establish a connection, the client first sends a SYN
(Synchronize) packet to the host. The server receives the SYN
and replies with a SYN-ACK
(Synchronize-Acknowledgment). If everything goes well, the client receives the server’s SYN-ACK
and sends an ACK
(Acknowledge).
You can learn more about the TCP three-way handshake on the MDN docs.
\\nTTFB measures the time between sending an HTTP request and receiving the first byte of the response. This metric measures the:
\\nThis is the total time taken for the entire data transfer, from request initiation to completion. It includes all phases of the request mentioned earlier. You can think of this as the variable that outputs the overall RTT measurement.
\\nUnderstanding these components will help you identify bottlenecks and devise performance optimization strategies based on which component is causing delays.
\\n\\nFor example, a high DNS resolution time might indicate issues with the DNS server, network configuration problems, or the need for DNS caching improvements. On the other hand, a long TTFB could point to application code inefficiency, network congestion, or the need for database query optimization.
\\nNow that we understand what round-trip time is and how cURL uses multiple protocols to measure this metric, let’s look at some basic uses of cURL for RTT measurement.
\\nBefore diving into the nitty gritty of RTT measurement, you must ensure cURL is properly installed on your machine.
\\nMost Unix-like systems, such as Linux and macOS, come with cURL pre-installed. For Windows users, you can download it from the official cURL website or install it via Chocolatey using the following command, assuming you have Chocolatey installed on your machine:
\\nchoco install curl\\n\\n
To verify your installation, open a terminal and run:
\\ncurl --version\\n\\n
The simplest way to measure response time with cURL is using the -w
(write-out) flag with timing parameters such as time_total
:
curl -w \\"\\\\nTotal time: %{time_total} seconds\\\\n\\" -o /dev/null -s http://www.example.com\\n\\n
This command contains -w.
This flag is the key to measuring RTT in cURL. It enables a write-out format string that cURL uses to output specific variables related to the request. These variables capture different stages of the connection and data transfer, which gives you a breakdown of the overall request time.
\\n%{time_total}
This is a timing variable that represents the total time taken for the entire request, from the initial request until the complete response is received. There are several other variables; we'll explore them later in this article.
\\n-o /dev/null
(optional)This is a command that tells cURL to discard the response body it fetches by default.
\\n-s
(optional)This suppresses the progress meter and lets the RTT operation run in silent mode.
\\nIf you run this command against an actual server URL, say Bing at https://www.bing.com
, the response will look something like this:
The request takes 0.836160
seconds (approximately 836ms
) to complete. Remember, the -s flag runs the operation in silent mode, which is why we get a result as simple as the one shown in this example. If you remove the -s
flag, here’s what you’ll see:
The result remains similar, with the only difference being that the progress is displayed.
\\nFor detailed timing information, we can customize the format string to include any combination of timing variables, along with descriptive labels:
\\ncurl -w \\"\\\\nDNS: %{time_namelookup}s\\\\nConnect: %{time_connect}s\\\\nTTFB: %{time_starttransfer}s\\\\nTotal: %{time_total}s\\\\n\\" \\\\\\n -o /dev/null -s https://bing.com/\\n\\n
This command will connect cURL to https://bing.com and then output the values of the time_namelookup, time_connect, time_starttransfer, and time_total timing variables with descriptive labels, each followed by a newline character (\n):
As you can see from the example above, these variables represent the RTT components we discussed earlier:
\\ntime_namelookup
— DNS resolution timetime_connect
— TCP connection timetime_starttransfer
— TTFBtime_total
— Total Transfer TimecURL RTT commands can get lengthy, and writing them out every time you need to run a test can be tedious. Thankfully, timing options can be defined in custom format files for automated monitoring.
\\nLet’s suppose you want to measure an RTT metric using multiple timing variables:
\\nDNS Lookup: %{time_namelookup}s\\n TCP Connection: %{time_connect}s\\n TLS Handshake: %{time_appconnect}s\\n Server Processing: %{time_pretransfer}s\\n Content Transfer: %{time_starttransfer}s\\n Total Time: %{time_total}s\\n\\n
Typing them into the command-line tool one by one would be both tedious and time-consuming. Instead, you can create a custom .txt file (here, curl-format.txt), add the variables, and then use it with cURL like so:
curl -w \\"@curl-format.txt\\" -o /dev/null -s https://bing.com\\n\\n
This way, you don't have to type out all the timing variables every time you need to measure RTT metrics. Note that you either need to run the command from the directory containing the file or pass the file's full path after the @.
When benchmarking, it's important to account for network variations and other factors that may impact measurement results in order to obtain an accurate average response time. This cannot be achieved with a single RTT request. Instead of repeatedly running the measurement request manually, you can create a loop to run the request a specified number of times:
\\nfor i in {1..5}; do\\n curl -w \\"%{time_total}\\\\n\\" -o /dev/null -s http://www.google.com\\ndone\\n\\n
This for
loop block will run the curl
command five times and output five different RTT results. You can then use these outputs to calculate the average response time of the measurement:
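If you'd rather have the average computed for you, you can pipe the loop's output through awk (a small sketch; adjust the URL and run count as needed):

for i in {1..5}; do
  curl -w "%{time_total}\n" -o /dev/null -s http://www.google.com
done | awk '{ sum += $1 } END { printf "Average: %.6fs over %d runs\n", sum / NR, NR }'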
For many developers, the primary interest in running RTT measurement analysis is testing servers and APIs for comprehensive insight into their performance and status. For this purpose, measuring connection timing and transfer timing alone, as we’ve done in the previous section, is simply not sufficient.
\\nBy combining the timing options we’ve used previously with other advanced cURL timing options, you can gain deeper insights into the different phases of the network transaction and understand the performance of your server’s HTTP requests. This will allow you to break down the process into distinct segments, which can be vital for diagnosing latency issues.
\\nThis is what an advanced custom file for cURL timing options should look like:
\\n\\\\n\\nRunning curl timing for: %{url_effective}\\n\\nDNS Lookup Timing:\\n namelookup: %{time_namelookup}s\\n\\nConnection Timing:\\n connect: %{time_connect}s\\n appconnect: %{time_appconnect}s\\n pretransfer: %{time_pretransfer}s\\n\\nTransfer Timing:\\n starttransfer: %{time_starttransfer}s\\n total: %{time_total}s\\n redirect: %{time_redirect}s\\n\\nData Metrics:\\n size_download: %{size_download} bytes\\n size_upload: %{size_upload} bytes\\n speed_download: %{speed_download} bytes/sec\\n\\nAdditional Info:\\n http_code: %{http_code}\\n num_connects: %{num_connects}\\n num_redirects: %{num_redirects}\\n remote_ip: %{remote_ip}\\n\\\\n\\n\\n
We’ve added additional timing options that capture the request’s data metrics and extra information such as the HTTP status, remote IP, and more.
\\nThis outputs a very detailed and comprehensive RTT measurement:
\\nAdditionally, we can use several headers and options provided by cURL to enhance our timing analyses. For example, we can use the User-Agent
header to mimic a web browser for testing servers that block requests without a valid User-Agent
header due to security reasons.
Headers are specified using the -H
flag followed by the header option string. Here’s how you can modify your command to add the User-Agent
header:
curl -w \\"@curl-format.txt\\" -o /dev/null -s http://www.google.com -H \'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36\'\\n\\n
Let’s break down what this User-Agent
string represents:
Mozilla/5.0
— A common identifier for compatibilityMacintosh; Intel Mac OS X 10_11_4
— Indicates the operating system (macOS 10.11.4)AppleWebKit/537.36
— The rendering engine used by Safari and ChromeChrome/50.0.2661.86
— The browser version (Chrome 50)Safari/537.36
— Indicates Safari compatibilityHere’s a list of some headers you can use to enhance your RTT measurements:
\\n-H \'Accept-Encoding: gzip, deflate, sdch\'
— This header allows the server to compress and reduce the amount of data transferred over the network using a compression algorithm the client can understand-H \'Upgrade-Insecure-Requests: 1\'
— Signals to the server that the client prefers to use HTTPS instead of HTTP for insecure resources-H \'Connection: keep-alive\'
— Requests that the connection remains open after the request is completedThe timing options we’ve used so far are enough to give you proper insight into how well your server is performing, but what if you want to test API endpoints?
\\nYou can test individual endpoints with the detailed timing configurations we’ve demonstrated and obtain vital information such as HTTP status. However, you can take it a step further by testing the RTT for different HTTP methods, such as POST
and PUT
:
# POST request with payload
curl -w "\nTotal Time: %{time_total}s" \
-X POST \
-H "Content-Type: application/json" \
-d '{"key": "value"}' \
-o /dev/null -s https://bing.com

# PUT request
curl -w "\nTotal Time: %{time_total}s\n" \
-X PUT \
-H "Content-Type: application/json" \
-d '{"key": "updated_value"}' \
-o /dev/null -s https://bing.com
To test more HTTP methods, replace the -X
flag’s value with any other HTTP method:
When doing an RTT test, a likely use case would be to test your server or web application against another to determine their performance. This comparison will help identify which application provides faster response times and better overall efficiency.
\\nThis can be achieved by creating a script such as the one below:
\\n#!/bin/bash\\n\\nfunction measure_rtt() {\\n local url=$1\\n local count=$2\\n\\n echo \\"Measuring RTT for $url ($count requests)\\"\\n echo \\"----------------------------------------\\"\\n\\n for ((i=1; i<=$count; i++)); do\\n curl -w \\"%{time_total}\\" -o /dev/null -s \\"$url\\"\\n echo \\"\\"\\n done\\n}\\n\\n# Compare urls\\nurls=(\\n \\"https://api.example.com/endpoint1\\"\\n \\"https://api.example.com/endpoint2\\"\\n)\\n\\nfor url in \\"${urls[@]}\\"; do\\n measure_rtt \\"$url\\" 5\\n echo \\"\\"\\ndone\\n\\n
The script iterates over an array of URLs and calls the measure_rtt
function with each URL and a count of five requests. For each URL, the RTT measurement is performed five times.
Here’s an output of a comparison between Google and Bing:
\\nThis test shows that bing.com
has a higher response time than google.com
in my region.
Assessing how your application or server performs from various points around the globe is a crucial part of benchmarking. This is because it allows you to identify regional performance issues and make data-driven decisions to enhance the efficiency and responsiveness of your application.
\\ncURL provides a --resolve
option that lets you simulate accessing a domain from a specific DNS resolver by forcing cURL to resolve the domain to a specific IP address:
curl -w "\nTotal Time: %{time_total}s\n\n" \
--resolve "bing.com:443:1.2.3.4" \
-o /dev/null -s https://bing.com
This command will force cURL to resolve bing.com to the IP address 1.2.3.4, which is used here as a placeholder. The specified port, 443, must match the protocol being used (i.e., 443 for HTTPS, 80 for HTTP).
To use this command, replace:
bing.com with the domain you want to test
1.2.3.4 with the IP address you want to force the domain to resolve to
443 with the appropriate port for the protocol you're using
to a Cloudflare CDN in Kenya, the result will look something like this:
Meanwhile, if we resolve it to a CDN in Brazil, we’ll get a very different result:
\\nAlthough cURL does come off as a one-size-fits-all solution for RTT and network diagnostics, you shouldn’t be quick to dismiss other network diagnostic tools. This is because they provide different perspectives on network performance and can help you verify and cross-check cURL’s results.
\\nAs mentioned earlier, these tools have distinct functionalities, so let’s compare them to curl:
\\nPing sends ICMP echo request packets to a target host and measures the time it takes for the response to be received. Think of ICMP echo requests as a type of network message used primarily for diagnostic purposes.
\\nHowever, unlike cURL, Ping primarily measures the time it takes for basic network-level communication. It doesn’t provide information about higher-level protocols like HTTP or HTTPS. It also doesn’t capture the time taken for server-side processing.
\\nPing and cURL serve different purposes and operate at different layers of the network stack, so they cannot be directly integrated into a single command or tool. However, you can use them together in a script or workflow to achieve combined functionality.
\\nYou can do this by creating a comparison script that outputs results for both Ping and cURL simultaneously:
\\n#!/bin/bash\\n\\ncompare_network_tools() {\\n local host=$1\\n\\n echo \\"Network Diagnostic Comparison for $host\\"\\n echo \\"----------------------------------------\\"\\n\\n # Ping Statistics\\n echo \\"Ping Results:\\"\\n ping_result=$(ping -c 5 \\"$host\\" | grep -E \\"time=|packet loss\\")\\n echo \\"$ping_result\\"\\n\\n # Curl Timing\\n echo -e \\"\\\\nCurl Timing:\\"\\n curl_result=$(curl -w \\"DNS: %{time_namelookup}s\\\\nConnect: %{time_connect}s\\\\nTotal Time: %{time_total}s\\" \\\\\\n -o /dev/null -s \\"$host\\")\\n echo \\"$curl_result\\"\\n}\\n\\ncompare_network_tools bing.com\\n\\n
The resulting output:
\\nTraceroute traces the path that packets take from the source to the destination. It identifies each step along the way and provides information about the routers involved in the connection and the time taken for packets to reach each point.
\\nSimilarly, Traceroute can be used alongside cURL in a script or workflow to diagnose network issues. If cURL measurements indicate high RTT, Traceroute can help pinpoint the specific network segments or routers contributing to the delay:
\\n#!/bin/bash\\n\\nadvanced_network_diagnostics() {\\n local target=$1\\n\\n # Traceroute with AS (Autonomous System) mapping\\n echo \\"Network Path Trace:\\"\\n traceroute -A \\"$target\\"\\n\\n\\n # Curl timing for correlation\\n echo -e \\"\\\\nCurl Performance:\\"\\n curl -w \\"Total Time: %{time_total}s\\\\nDNS: %{time_namelookup}s\\\\n\\" \\\\\\n -o /dev/null -s \\"$target\\"\\n}\\n\\nadvanced_network_diagnostics bing.com\\n\\n
These tools may not be on par with cURL in terms of functionalities, but their distinct nature makes them excellent complementary tools for understanding network behavior.
\\nIn this article, we explored how to effectively use cURL to measure round-trip time. We covered the fundamental concepts of RTT, its impact on network performance, and its role in ensuring a responsive user experience.
\\nWe also examined how you can use cURL’s write-out (-w
) flag to run RTT requests and extract detailed information on various stages of the request-response cycle, including time_connect
, time_starttransfer
, and time_total
, to gain comprehensive insights into the performance of a host and the time it takes to respond to a client’s request.
Finally, we explored advanced timing techniques to enhance RTT requests for more comprehensive results, and compared cURL to similar tools for running RTT requests. Hope you found it useful!
useCallback is a React Hook that memoizes functions, ensuring they maintain a stable reference across renders unless their dependencies change. This helps optimize performance by preventing unwanted re-renders in child components.
\\nReact applications often suffer from unnecessary re-renders, which can negatively impact performance. One common cause is when functions are recreated on every render, leading to inefficiencies, especially when passed as props to memoized components. This is where useCallback comes in.
\\nBy the end of this guide, you’ll have a clear understanding of useCallback
and how to use it properly in your React applications.
What is useCallback used for?

useCallback
is used to prevent function recreation on every render, improving performance in React applications.
What is the difference between useCallback and useMemo?

While useCallback memoizes functions, useMemo memoizes values.
useCallback is useful when passing functions to memoized components (React.memo) or when optimizing event handlers in performance-critical applications.
What is the difference between useEffect and useCallback?

Both are React Hooks, but useEffect runs side effects after renders, while useCallback
stabilizes function references.
Before discussing the useCallback hook, let's understand function references and why reference stability matters in React.
\\nIn JavaScript, functions are objects. Each time a function is declared inside a component, a new function instance is created with a different reference in memory. For example:
\\nfunction MyComponent() {\\n const handleClick = () => {\\n console.log(\'Clicked\');\\n };\\n \\n return <Button onClick={handleClick} />;\\n}\\n\\n
In the above code snippet, handleClick
is recreated on every render. Even if the logic inside it hasn’t changed, its reference is new. This can cause unnecessary re-rendering when the function is passed as a prop to a memoized child component (React.memo
).
When a function reference changes, any memoized child component receiving that function as a prop will re-render even if the function’s behavior hasn’t changed.
\\nconst Parent = () => {\\n const handleClick = () => console.log(\'Clicked\');\\n return <Child onClick={handleClick} />;\\n};\\n\\nconst Child = React.memo(({ onClick }) => {\\n console.log(\'Child rendered\');\\n return <button onClick={onClick}>Click Me</button>;\\n});\\n\\n
Without stabilizing the handleClick function with the useCallback hook, Child will re-render on every render of Parent, even though the function's behavior hasn't changed.
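As a preview of the fix we'll cover next, wrapping the handler in useCallback keeps its reference stable across renders of Parent:

const Parent = () => {
  // the same function instance is returned on every render (empty dependency array)
  const handleClick = useCallback(() => console.log('Clicked'), []);
  return <Child onClick={handleClick} />; // Child's memo check now passes
};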
useCallback() is one of React's performance optimization hooks. It caches a function definition between renders and returns the same function reference as long as the dependencies remain unchanged since the previous render.
\\nconst memoizedFunction = useCallback(() => {\\n //logic here\\n}, [dependency1, dependency2, ...]);\\n\\n
The useCallback
hook takes two arguments. The first is the function you want to memoize, and the second is a dependency array. Whenever any value in this array changes, the function is recreated with a new reference.
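For example, a memoized handler that reads a state value would list it as a dependency (query here is a hypothetical piece of state):

const handleSearch = useCallback(() => {
  console.log(`Searching for ${query}`);
}, [query]); // recreated only when query changes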
Let’s consider an ecommerce admin case study where a product list page displays the list of product items and the product item component receives a function prop to delete the product from the list.
\\nCreate a ProductList.jsx
component in your React project and add the following:
import React, { memo, useState } from \'react\';\\nconst ProductItem = memo(({ product, onDelete }) => {\\n console.log(\\"Rendering product item component\\")\\n return (\\n <div className=\\"p-4 w-full border rounded-md shadow-sm mb-4 flex flex-col items-center text-center\\">\\n <img \\n src={product.image} \\n alt={product.name} \\n className=\\"w-32 h-32 object-cover rounded-md mb-2\\"\\n />\\n <h3 className=\\"text-lg font-semibold\\">{product.name}</h3>\\n <p className=\\"text-sm text-gray-600 mb-4\\">{product.description}</p>\\n <button \\n onClick={() => onDelete(product.id)} \\n className=\\"bg-red-500 text-white px-4 py-2 rounded-md hover:bg-red-600\\"\\n >\\n Delete\\n </button>\\n </div>\\n );\\n});\\nconst ProductList = () => {\\n const [isLoggedIn, setIsLoggedIn] = useState(false) \\n const [products, setProducts] = useState([\\n { \\n id: 1, \\n name: \'Product 1\', \\n description: \'Description for Product 1\', \\n image: \'https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-2.webp\' \\n },\\n { \\n id: 2, \\n name: \'Product 2\', \\n description: \'Description for Product 2\', \\n image: \'https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-4.webp\' \\n },\\n { \\n id: 3, \\n name: \'Product 3\', \\n description: \'Description for Product 3\', \\n image: \'https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-3.webp\' \\n },\\n { \\n id: 4, \\n name: \'Product 4\', \\n description: \'Description for Product 4\', \\n image: \'https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-1.webp\' \\n }\\n ]);\\n const toggleLogin = () => {\\n setIsLoggedIn(val => !val );\\n };\\n const deleteProduct = (id) => {\\n setProducts(products.filter(product => product.id !== id));\\n };\\n return (\\n <div className=\\"w-full p-10\\">\\n <h2 className=\\"text-2xl font-bold mb-6 text-center\\">Product List</h2>\\n {isLoggedIn ? <button \\n onClick={toggleLogin} \\n className=\\"bg-red-500 text-white px-4 py-2 rounded-md hover:bg-red-600 mb-6\\"\\n >\\n Log out\\n </button> : <button \\n onClick={toggleLogin} \\n className=\\"bg-blue-500 text-white px-4 py-2 rounded-md hover:bg-blue-600 mb-6\\"\\n >\\n Log in\\n </button>}\\n <div className=\'flex space-x-10 w-full\'>\\n {products.length > 0 ? (\\n products.map(product => (\\n <ProductItem \\n key={product.id} \\n product={product} \\n onDelete={deleteProduct} \\n />\\n ))\\n ) : (\\n <p className=\\"text-gray-500 text-center\\">No products available.</p>\\n )}\\n </div>\\n </div>\\n );\\n};\\nexport default ProductList;\\n\\n
React.memo
wraps the ProductItem
component to prevent unnecessary re-renders. This means ProductItem
will only re-render if its product
or onDelete
props change.
We've added a console log to check whether the component re-renders when the isLoggedIn
state updates. Without memo
, ProductItem
would re-render every time isLoggedIn
changes, even if the product list remains unchanged.
Running the project should result in the following:
\\nDid you notice that the ProductItem
component re-renders every time we click the Log in or Log out button? This makes memoization ineffective.
Imagine having thousands of products in the list — these unnecessary re-renders could slow down the app significantly. If the buttons are clicked repeatedly, it might even lead to performance issues or crashes.
\\nThe problem is that each time the isLoggedIn
state changes, the ProductList
component re-renders and recreates the deleteProduct
function with a new reference, causing unnecessary re-rendering when the function is passed as a prop to the memoized ProductItem
component (React.memo
).
To resolve this issue, we have to stabilize the deleteProduct
function reference by wrapping it in a useCallback
hook:
const deleteProduct = useCallback((id) => {\\n setProducts((prevProducts) => prevProducts.filter(product => product.id !== id));\\n}, []);\\n\\n
Now the ProductItem component no longer re-renders after clicking the Log in or Log out button. Note that the fix also switches to the functional update form (prevProducts => ...); without it, products would have to appear in the dependency array, and the callback's reference would change whenever a product was deleted.
Imagine you’re building an e-commerce app where users can infinitely scroll through products. The product list is fetched from an API, and users can favorite items by clicking a heart icon. To optimize performance, we want to avoid unnecessary function re-creations every time the component re-renders.
\\nIf you think wrapping the toggleFavorite
function in useCallback
is the right approach, you’re correct.
Here is an implementation of this feature:
\\nimport React, { useState, useCallback } from \'react\';\\n\\nconst ProductItem = React.memo(({ product, onFavorite }) => {\\n console.log(`Rendering ${product.name}`);\\n return (\\n <div className=\\"p-4 w-full border rounded-md shadow-sm mb-4 flex flex-col items-center text-center\\">\\n <img \\n src={product.image} \\n alt={product.name} \\n className=\\"w-32 h-32 object-cover rounded-md mb-2\\"\\n />\\n <h3 className=\\"text-lg font-semibold\\">{product.name}</h3>\\n <button \\n onClick={() => onFavorite(product.id)} \\n className={`px-4 py-2 rounded-md ${\\n product.isFavorite ? \'bg-red-500\' : \'bg-gray-300\'\\n }`}\\n >\\n {product.isFavorite ? \\"❤️ Unfavorite\\" : \\"🤍 Favorite\\"}\\n </button>\\n </div>\\n );\\n});\\n\\nconst ProductList = () => {\\n const [products, setProducts] = useState([\\n { id: 1, name: \'Product 1\', image: \'https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-2.webp\', isFavorite: false },\\n { id: 2, name: \'Product 2\', image: \'https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-1.webp\', isFavorite: false },\\n { id: 3, name: \'Product 3\', image: \'https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-3.webp\', isFavorite: false }\\n ]);\\n\\n const toggleFavorite = useCallback((id) => {\\n setProducts(products.map(product =>\\n product.id === id ? { ...product, isFavorite: !product.isFavorite } : product\\n ));\\n }, [products]); \\n\\n return (\\n <div className=\\"w-full p-10\\">\\n <h2 className=\\"text-2xl font-bold mb-6 text-center\\">Product List</h2>\\n <div className=\\"flex space-x-10 w-full\\">\\n {products.map(product => (\\n <ProductItem \\n key={product.id} \\n product={product} \\n onFavorite={toggleFavorite} \\n />\\n ))}\\n </div>\\n </div>\\n );\\n};\\nexport default ProductList;\\n\\n
Running the project should produce the following result:
\\nDid you notice that when you click the Favorite or Unfavorite button for a product, the ProductItem
component re-renders for all products — even though we used useCallback
to stabilize the toggleFavorite
function?
This happens because products
is unnecessarily included in the dependency array of useCallback
. Every time a product’s isFavorite
state changes, the entire products
state updates. As a result, toggleFavorite
gets recreated with a new reference, causing all ProductItem
components to re-render.
We can optimize this by removing products
from the dependency array, like this:
const toggleFavorite = useCallback((id) => {\\n setProducts((prevProducts) =>\\n prevProducts.map(product =>\\n product.id === id ? { ...product, isFavorite: !product.isFavorite } : product\\n )\\n );\\n}, []);\\n\\n
Now, the toggleFavorite
function uses the functional update pattern, ensuring that it always works with the latest state by accessing prevProducts
, which represents the state before the update. The empty dependency array ([]
) ensures that toggleFavorite
is created only once for the lifetime of the component and keeps the same reference across re-renders, preventing unnecessary function re-creations:
With this optimization, the ProductItem
component now re-renders only for the specific product whose favorite status changes, significantly improving performance.
When creating a custom Hook, wrapping any returned functions with useCallback
is best practice to maintain a stable reference:
function useCart() {\\n const [cart, setCart] = useState([]);\\n\\n const addToCart = useCallback((item) => {\\n setCart((prevCart) => [...prevCart, item]);\\n }, []);\\n\\n const removeFromCart = useCallback((id) => {\\n setCart((prevCart) => prevCart.filter(item => item.id !== id));\\n }, []);\\n\\n return { cart, addToCart, removeFromCart };\\n}\\n\\n
By doing this, you allow components that use your Hook to avoid unnecessary re-renders and optimize performance when needed.
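For instance, a memoized child that receives addToCart from the Hook keeps skipping re-renders (a small sketch using a hypothetical CartButton component):

const CartButton = React.memo(({ onAdd, item }) => (
  <button onClick={() => onAdd(item)}>Add to cart</button>
));

function ProductPage({ item }) {
  const { addToCart } = useCart();
  // addToCart keeps a stable reference, so CartButton won't re-render needlessly
  return <CartButton onAdd={addToCart} item={item} />;
}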
\\nWhile useCallback
is useful for performance optimization, there are cases where it is unnecessary. Here are two key scenarios where useCallback
is not needed:
1. If the child component receiving the function isn't wrapped in memo, memoizing the function provides no benefit, so you don't need useCallback.
2. If the function is only used inside the component and is never passed as a prop or listed in a dependency array, you can skip useCallback.
.useCallback, useMemo, useEffect, and useRef are all React hooks that help optimize performance, but they serve different purposes. Here’s a comparison of how each one works and when to use them:
\\nFeature | \\nuseCallback | \\nuseMemo | \\nuseEffect | \\nuseRef | \\n
---|---|---|---|---|
Purpose | \\nCaches a function to prevent re-creation on re-renders. | \\nCaches a computed value to avoid unnecessary recalculations. | \\nRuns side effects (API calls, subscriptions, DOM updates) after renders. | \\nStores a persistent reference without triggering re-renders. | \\n
Returns | \nA cached function. | \nA cached value. | \nNothing (executes code after render). | \nA mutable object { current: value }. | \n
Triggers Re-render? | \\nNo | \\nNo | \\nYes (when state changes) | \\nNo | \\n
In this article, we explored the useCallback
hook and how it optimizes app performance by preventing unnecessary re-renders. We demonstrated its use with real-world examples, discussed how to write more efficient custom hooks, and identified when useCallback
is truly needed versus when it is unnecessary.
Additionally, we compared useCallback
with related hooks like useMemo
, useRef
, and useEffect
, clarifying their use cases in React.
Now, you have a solid understanding of useCallback
and how to use it effectively to improve your React applications.
The virtual DOM is a fundamental React concept; if you've written any React code within the last few years, you've probably heard of it.
\\nIn this article, we’ll define the characteristics of the virtual document object model (DOM), explore its benefits in React, and review a practical example. Let’s get started!
\\nThe virtual DOM is a lightweight, memory-based representation of the real DOM that enables React to update user interfaces by calculating the minimal set of changes needed. When a component’s state changes, React creates a new virtual DOM and then compares it with the previous version using a diffing algorithm.
\\nIt then updates only the parts of the real DOM that have actually changed. This makes for a better user experience because it reduces the number of direct manipulations to the browser’s DOM.
\\nEditor’s note: This post was updated in March 2025 by Muhammed Ali to include a clear and succinct definition of the virtual DOM, information on how the virtual DOM works, and the benefits/pitfalls of using the virtual DOM.
\\nThe virtual DOM works in three main steps:
\\nWhen a component’s state or props change, React re-renders the component to generate a new virtual DOM tree. This tree is a representation of the UI composed of plain JavaScript objects. It mirrors the structure of the actual DOM elements but omits browser details, making it quick to create and update without direct interaction with the real DOM.
\\nOnce the new virtual DOM tree is created, React performs a diffing process. It compares the new tree with the previous version to identify exactly which elements have changed. This comparison matches element types using keys for list items to quickly pinpoint modifications. By isolating only the differences, React significantly minimizes the number of updates needed.
\\nAfter determining the differences, React generates a set of minimal update instructions and applies these changes to the actual DOM. This process is known as patching.
\\nInstead of re-rendering the entire DOM tree, only the modified parts are updated. This selective updating reduces the performance overhead associated with full DOM updates. This will mean that the user interface remains fast even as the application grows in complexity.
\\nDOM operations are very fast, light operations. However, when the app data changes and triggers an update, re-rendering can be expensive.
\\nLet’s simulate re-rendering a page with the JavaScript code below:
\\nconst update = () => {\\n const element = `\\n <h3>JavaScript:</h3>\\n <form>\\n <input type=\\"text\\"/>\\n </form>\\n <span>Time: ${new Date().toLocaleTimeString()}</span>\\n `;\\n\\n document.getElementById(\\"root1\\").innerHTML = element;\\n};\\n\\nsetInterval(update, 1000);\\n\\n
You can find the complete code on CodeSandbox. The DOM tree representing the document looks like the following:
\\nThe setInterval()
callback in the code lets us trigger a simulated re-render of the UI after every second. As seen in the GIF below, the document DOM elements are rebuilt and repainted on each update. The text input in the UI also loses its state due to this re-rendering:
As seen above, the text field loses the input value when an update occurs in the UI, which calls for optimization.
\\nDifferent JavaScript frameworks offer different solutions and strategies to optimize re-rendering. However, React implements the concept of the virtual DOM.
\\nAs the name implies, the virtual DOM is a much lighter replica of the actual DOM in the form of objects. The virtual DOM can be saved in the browser memory and doesn’t directly change what is shown on the user’s browser. Implemented by several other frontend frameworks, like Vue, React’s declarative approach is unique.
\\n\\nHere’s what to consider when contemplating use of the virtual DOM in React:
\\nA common misconception is that the virtual DOM is faster than or rivals the actual DOM. However, this is untrue.
\\nIn fact, the virtual DOM’s operations support and add on to those of the actual DOM. Essentially, the virtual DOM provides a mechanism that allows the actual DOM to compute minimal DOM operations when re-rendering the UI.
\\nFor example, when an element in the real DOM is changed, the DOM will re-render the element and all of its children. When it comes to building complex web applications with a lot of interactivity and state changes, this approach is slow and inefficient.
\\nInstead, in the rendering process, React employs the concept of the virtual DOM, which conforms with its declarative approach. Therefore, we can specify what state we want the UI to be in, after which React makes it happen.
\\nAfter the virtual DOM is updated, React compares it to a snapshot of the virtual DOM taken just before the update, determines what element was changed, and then updates only that element on the real DOM. This is one method the virtual DOM employs to optimize performance. We’ll go into more detail later.
\\n\\nThe virtual DOM abstracts manual DOM manipulations away from the developer, helping us to write more predictable and unruffled code so that we can focus on creating components.
\\nThanks to the virtual DOM, you don’t have to worry about state transitions. Once you update the state, React ensures that the DOM matches that state. For instance, in our last example, React ensures that on every re-render, only Time
gets updated in the actual DOM. Therefore, we won’t lose the value of the input field while the UI update happens.
Let’s consider the following render code representing the React version of our previous JavaScript example:
\\n// ...\\nconst update = () => {\\n const element = (\\n <>\\n <h3>React:</h3>\\n <form>\\n <input type=\\"text\\" />\\n </form>\\n <span>Time: {new Date().toLocaleTimeString()}</span>\\n </>\\n );\\n root.render(element);\\n};\\n\\n
For brevity, we have removed some of the code. You can see the complete code on CodeSandbox. We can also write JSX code in plain React, as follows:
\\nconst element = React.createElement(\\n React.Fragment,\\n null,\\n React.createElement(\\"h3\\", null, \\"React:\\"),\\n React.createElement(\\n \\"form\\",\\n null,\\n React.createElement(\\"input\\", {\\n type: \\"text\\"\\n })\\n ),\\n React.createElement(\\"span\\", null, \\"Time: \\", new Date().toLocaleTimeString())\\n);\\n\\n
Keep in mind that you can get the React equivalent of JSX code by pasting the JSX elements in a Babel REPL editor.
Now, if we log the React element in the console, we'll end up with something like the following:
\\nconst element = (\\n <>\\n <h3>React:</h3>\\n <form>\\n <input type=\\"text\\" />\\n </form>\\n <span>Time: {new Date().toLocaleTimeString()}</span>\\n </>\\n );\\n console.log(element)\\n\\n
The object, as seen above, is the virtual DOM. It represents the user interface.
\\nTo understand the virtual DOM strategy, we need to understand the two major phases that are involved: rendering and reconciliation.
\\nWhen we render an application user interface, React creates a virtual DOM tree representing that UI and stores it in memory. On the next update, or in other words, when the data that renders the app changes, React will automatically create a new virtual DOM tree for the update.
\\nTo further explain this, we can visually represent the virtual DOM as follows:
\\nThe image on the left is the initial render. As the Time
changes, React creates a new tree with the updated node, as seen on the right side.
Remember, the virtual DOM is just an object representing the UI, so nothing gets drawn on the screen.
\\nAfter React creates the new virtual DOM tree, it compares it to the previous snapshot using a diffing algorithm called reconciliation to figure out what changes are necessary.
\\nAfter the reconciliation process, React uses a renderer library like ReactDOM, which takes the different information to update the rendered app. This library ensures that the actual DOM only receives and repaints the updated node or nodes:
\\nAs seen in the image above, only the node whose data changes gets repainted in the actual DOM. The GIF below further proves this statement:
\\nWhen a state change occurs in the UI, we’re not losing the input value.
\\nIn summary, on every render, React compares the virtual DOM tree with the previous version to determine which node gets updated, ensuring that the updated node matches up with the actual DOM.
\\nWhen React diffs two virtual DOM trees, it begins by comparing whether or not both snapshots have the same root element. If they have the same elements, like in our case, where the updated nodes are of the same span
element type, React moves on and recurses on the attributes.
In both snapshots, no attribute is present or updated on the span
element. React then repeats the procedure on the children. Upon seeing that the Time
text node has changed, React will only update the actual node in the real DOM.
On the other hand, if both snapshots have different element types, which is rare in most updates, React will destroy the old DOM nodes and build a new one. For instance, going from span
to div
, as shown in the respective code snippets below:
<span>Time: 04:36:35</span> \\n<div>Time: 04:36:38</div>\\n\\n
In the following example, we render a simple React component that updates the component state after a button click:
\\nimport { useState } from \\"react\\";\\n\\nconst App = () => {\\n const [open, setOpen] = useState(false);\\n\\n return (\\n <div className=\\"App\\">\\n <button onClick={() => setOpen((prev) => !prev)}>toggle</button>\\n <div className={open ? \\"open\\" : \\"close\\"}>\\n I\'m {open ? \\"opened\\" : \\"closed\\"}\\n </div>\\n </div>\\n );\\n};\\nexport default App;\\n\\n
Updating the component state re-renders the component. However, as shown below, on every re-render, React knows only to update the class name and the text that changed. This update will not hurt unaffected elements in the render:
\\nSee the code and demo on CodeSandbox.
\\nWhen we modify a list of items, how React diffs the list depends on whether the items are added at the beginning or the end of the list. Consider the following list:
\\n<ul> \\n <li>item 3</li>\\n <li>item 4</li>\\n <li>item 5</li>\\n</ul>\\n\\n
On the next update, let’s append an item 6
at the end, like so:
<ul> \\n <li>item 3</li>\\n <li>item 4</li>\\n <li>item 5</li>\\n <li>item 6</li>\\n</ul>\\n\\n
React compares the items from the top. It matches the first, second, and third items, and knows only to insert the last item. This computation is straightforward for React.
\\nHowever, let’s insert item 2
at the beginning, as follows:
<ul> \\n <li>item 2</li>\\n <li>item 3</li>\\n <li>item 4</li>\\n <li>item 5</li>\\n</ul>\\n\\n
Similarly, React compares from the top, and immediately realizes that item 3
doesn’t match item 2
of the updated tree. It therefore sees the list as an entirely new one that needs to be rebuilt.
Instead of rebuilding the entire list, we want the DOM to compute minimal operations by only prepending item 2
. React lets us add a key
prop to uniquely identify the items as follows:
<ul> \\n <li key=\\"3\\">item 3</li>\\n <li key=\\"4\\">item 4</li>\\n <li key=\\"5\\">item 5</li>\\n</ul>\\n\\n<ul> \\n <li key=\\"2\\">item 2</li>\\n <li key=\\"3\\">item 3</li>\\n <li key=\\"4\\">item 4</li>\\n <li key=\\"5\\">item 5</li>\\n <li key=\\"6\\">item 6</li>\\n</ul>\\n\\n
With the implementation above, React would know that we have prepended item 2
and appended item 6
. As a result, it would work to preserve the items that are already available and add only the new items in the DOM.
If we omit the key
prop whenever we map to render a list of items, React is kind enough to alert us in the browser console.
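For example, when rendering a list, the usual fix is to use a stable, unique identifier from the data as the key; this sketch assumes each item carries an id field:

```jsx
// Stable keys let React match items across renders instead of rebuilding the list
<ul>
  {items.map((item) => (
    <li key={item.id}>{item.name}</li>
  ))}
</ul>
```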
Before we wrap up, here’s a question that often comes up: is the shadow DOM the same as the virtual DOM? The short answer is no; they are different technologies with different behaviors.
The shadow DOM is a tool for implementing web components. Take, for instance, the HTML input element of type range:
<input type=\\"range\\" />\\n\\n
This gives us the following result:
\\nIf we inspect the element using the browser’s developer tools, we’ll see only a simple input
element. However, internally, browsers encapsulate and hide other elements and styles that make up the input slider.
Using Chrome DevTools, we can enable the Show user agent shadow DOM
option from Settings
to see the shadow DOM:
In the image above, the structured tree of elements from the #shadow-root
inside the input
element is called the shadow DOM tree. It provides a way to isolate components, including styles from the actual DOM.
Therefore, we’re sure that a widget or component’s style, like the input
range above, is preserved no matter where it is rendered. In other words, their behavior or appearance is never affected by other elements’ styles from the real DOM.
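Beyond the built-in widgets, we can create a shadow root of our own with the standard attachShadow API; here is a minimal sketch (the #host element is an assumption):

```js
// Attach an open shadow root to a host element; its styles stay scoped to the shadow tree
const host = document.querySelector("#host");
const shadowRoot = host.attachShadow({ mode: "open" });
shadowRoot.innerHTML = `
  <style>p { color: red; }</style>
  <p>This paragraph is styled only inside the shadow tree</p>
`;
```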
The table below summarizes the differences between the real DOM, the virtual DOM, and the shadow DOM:
| | Real DOM | Virtual DOM | Shadow DOM |
| --- | --- | --- | --- |
| Description | An interface for web documents; allows scripts to interact with the document | An in-memory replica of the actual DOM | A tool for implementing web components, or an isolated DOM tree within an actual DOM for scoping purposes |
| Relevance to developers | Developers manually perform DOM operations to manipulate the DOM | Developers don’t have to worry about state transitions; the virtual DOM abstracts DOM manipulation away from the developer | Developers can create reusable web components without worrying about style conflicts from the hosting document |
| Who uses them | Implemented in browsers | Used by libraries and frameworks like React, Vue, etc. | Used by web components |
| Project complexity | Suitable for simple, small to medium-scale projects without complex interactivity | Suitable for complex projects with a high level of interactivity | Suitable for simple to medium-scale projects with less complex interactivity |
| CPU and memory usage | Uses less CPU and memory than virtual DOM updates | Uses more CPU and memory than direct real DOM updates | Uses less CPU and memory than virtual DOM updates |
| Encapsulation | Does not support encapsulation, since components can be modified outside of their scope | Supports encapsulation, as components cannot be modified outside of their scope | Supports encapsulation, as components cannot be modified outside of their scope |
React uses the virtual DOM as a strategy to compute minimal DOM operations when re-rendering the UI. It is not in rivalry with or faster than the real DOM.
\\nThe virtual DOM provides a mechanism that abstracts manual DOM manipulations away from the developer, helping us to write more predictable code. It does so by comparing two render trees to determine exactly what has changed, only updating what is necessary on the actual DOM.
\\nLike React, Vue also employs this strategy. However, Svelte proposes another approach to ensure that an application is optimized, compiling all components into independent, tiny JavaScript modules, making the script very light and fast to run.
\\nI hope you enjoyed reading this article. Be sure to share your thoughts in the comment section if you have questions or contributions.
It’s easy to feel overwhelmed by the different ways to approach decision-making in your JavaScript code, especially when working with conditional logic. Like everyone else, you might start with if...else statements.
But as your logic grows more complex, you’ll discover the switch
statement in JavaScript is simply better for handling multiple conditions in a clean and readable way.
In this article, I’ll walk you through the ins and outs of switch
statements: the basics of syntax, typical use cases like mapping values, a deep dive into the often-confusing fallthrough behavior, and how to manage it effectively with or without break
statements.
A switch
statement is a control flow mechanism that enables your program to execute different blocks of code based on the value of a given expression. You can think of switch
statements as a traffic controller; you provide a value, and the switch
statement efficiently directs the execution flow to the appropriate code block, handling multiple potential conditions with ease.
This structure is particularly beneficial when you’re dealing with numerous specific cases, as it can make your code cleaner and more organized compared to a long chain of if...else
statements.
Syntax of switch statements

At its core, the switch case in JS evaluates an expression once and then compares that result against a series of defined cases. Each case corresponds to a potential match, and the associated block of code is executed when a match is found. If none of the cases match, an optional default block can be used to handle any unexpected values or conditions.
Here’s the basic structure of a switch
statement:
switch (expression) {\\n case value1:\\n // Code to execute if expression === value1\\n break;\\n case value2:\\n // Code to execute if expression === value2\\n break;\\n default:\\n // Code to execute if no cases match\\n}\\n
The key elements of a switch
statement are:
- expression – The value or condition you’re evaluating
- case – Each case checks if the expression matches a specific value
- break – Stops the execution of the switch block once a match is found. Without it, the code will “fall through” to the next case (more on this later)
- default – Optionally handles any situation where no case matches

Use cases for switch statements

One common scenario for using a switch case in JS is when you need to map a set of input values to corresponding outputs or actions. For example, consider a function that handles user input or manages state in a game:
let direction = \\"left\\";\\n\\nswitch (direction) {\\n case \\"up\\":\\n console.log(\\"Moving up\\");\\n break;\\n case \\"down\\":\\n console.log(\\"Moving down\\");\\n break;\\n case \\"left\\":\\n console.log(\\"Moving left\\");\\n break;\\n case \\"right\\":\\n console.log(\\"Moving right\\");\\n break;\\n default:\\n console.log(\\"Invalid direction\\");\\n}\\n
By separating each possible direction into its own case, the intent of the code is immediately obvious. It’s easy to add or remove cases without disrupting the overall structure.
Replacing messy if...else statements

If you’ve ever written a bunch of if...else checks, you know how messy they can get. For example:
let day = \\"Monday\\";\\n\\nif (day === \\"Monday\\") {\\n console.log(\\"Start of the workweek\\");\\n} else if (day === \\"Friday\\") {\\n console.log(\\"Almost the weekend!\\");\\n} else if (day === \\"Saturday\\" || day === \\"Sunday\\") {\\n console.log(\\"Weekend vibes!\\");\\n} else {\\n console.log(\\"Midweek days\\");\\n}\\n
A switch
can tidy this up nicely:
// Equivalent switch statement\\nswitch (day) {\\n case \\"Monday\\":\\n console.log(\\"Start of the workweek\\");\\n break;\\n case \\"Friday\\":\\n console.log(\\"Almost the weekend!\\");\\n break;\\n case \\"Saturday\\":\\n case \\"Sunday\\":\\n console.log(\\"Weekend vibes!\\");\\n break;\\n default:\\n console.log(\\"Midweek days\\");\\n}\\n
As you can see, the switch statement groups the different conditions under one variable, making the logic clearer and easier to read than several if...else
blocks. It also handles multiple cases (like “Saturday” and “Sunday”) neatly by allowing them to share the same output without repeating code.
Fallthrough behavior in switch statements

One of the most notable quirks of a switch
statement is fallthrough. If you don’t include a break
(or another exit statement) in a switch
statement, the code will continue to the next case — even if it doesn’t match.
While this can be a handy shortcut in some scenarios, it can also lead to unintended results if you forget to use break
where it’s needed.
Let’s look at an example of fallthrough behavior being useful. In the following example, we want the same code to run for grades A
, B
, and C
. By omitting break
until after C
, we conveniently group these cases:
let grade = \\"A\\";\\n\\nswitch (grade) {\\n case \\"A\\":\\n case \\"B\\":\\n case \\"C\\":\\n console.log(\\"You passed!\\");\\n break;\\n case \\"D\\":\\n console.log(\\"You barely passed...\\");\\n break;\\n case \\"F\\":\\n console.log(\\"You failed.\\");\\n break;\\n default:\\n console.log(\\"Invalid grade\\");\\n}\\n
Here, if grade
is A
, B
, or C
, the same block of code will execute. The same message — You passed!
— is displayed. This is a clean way to group multiple cases.
On the flip side, here’s an example of unintentional fallthroughs causing unexpected behaviors:
\\nlet fruit = \\"apple\\";\\n\\nswitch (fruit) {\\n case \\"apple\\":\\n console.log(\\"Apples are $0.50\\");\\n case \\"orange\\":\\n console.log(\\"Oranges are $0.75\\");\\n break;\\n case \\"banana\\":\\n console.log(\\"Bananas are $0.25\\");\\n break;\\n default:\\n console.log(\\"Invalid fruit\\");\\n}\\n\\n// Output:\\n// Apples are $0.50\\n// Oranges are $0.75\\n
Since there’s no break
after the apple
case, the code “falls through” and executes the orange
case as well, even though fruit is \\"apple\\"
. This might not be what you intended!
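Adding the missing break after the apple case restores the intended behavior; here is the corrected version of the same example:

```js
let fruit = "apple";

switch (fruit) {
  case "apple":
    console.log("Apples are $0.50");
    break; // stops execution from falling through to the orange case
  case "orange":
    console.log("Oranges are $0.75");
    break;
  case "banana":
    console.log("Bananas are $0.25");
    break;
  default:
    console.log("Invalid fruit");
}

// Output:
// Apples are $0.50
```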
switch vs. if...else: Which should you use?

The choice between a switch
statement and an if...else
chain often depends on the complexity of the conditions you’re evaluating. Here’s a table comparing them in detail:
| Aspect | switch statement | if...else statement |
| --- | --- | --- |
| Use case | Best when you are comparing the same expression against multiple specific values | Best when conditions are complex, involve ranges, or require evaluation of different expressions |
| Condition complexity | Ideal for simple equality checks | More flexible, accommodating complex conditions such as ranges or compound logic (e.g., if (x > 10 && x < 20)) |
| Code readability | Offers a cleaner, more organized structure when dealing with many fixed-value cases, making the code easier to follow | Can become harder to read when multiple conditions are chained together, especially with nested or compound conditions |
| Evaluation | Evaluates a single expression once and compares it against defined cases | Evaluates each condition independently, which can be beneficial if different expressions need to be checked |
| Default behavior | Includes an optional default case to handle unmatched values, providing a clear fallback | Typically uses a final else block to catch any conditions that aren’t met by the preceding if and else if statements |
| Flexibility | Limited to checking the equality of a single expression | Highly flexible, allowing for a broader range of logical conditions and comparisons beyond mere equality |
Although switch
statements can be handy in many cases, there are other elegant JavaScript alternatives, like object literals. Object literals can simplify your code, reduce the risk of bugs caused by fallthrough, and make it easier to add or remove actions down the road. Here’s an example:
const actions = {\\n play: () => console.log(\\"Playing the music\\"),\\n pause: () => console.log(\\"Pausing the music\\"),\\n stop: () => console.log(\\"Stopping the music\\"),\\n rewind: () => console.log(\\"Rewinding the music\\"),\\n};\\n\\nlet command = \\"play\\";\\n\\nactions[command] ? actions[command]() : console.log(\\"Invalid command\\");\\n
This approach is more concise and avoids the pitfalls of fallthrough. It’s also easier to extend; just add a new key-value pair to the object. For instance, a function that maps day numbers to day names can be written with an object literal as:
\\nfunction getDayName(dayNumber) {\\n const dayNames = {\\n 0: \'Sunday\',\\n 1: \'Monday\',\\n 2: \'Tuesday\',\\n 3: \'Wednesday\',\\n 4: \'Thursday\',\\n 5: \'Friday\',\\n 6: \'Saturday\'\\n };\\n return dayNames[dayNumber] || \'Invalid day\';\\n}\\n\\nconsole.log(getDayName(3)); // Outputs: Wednesday\\n
This method is not only shorter but also scales well when you simply need to map a set of keys to values. However, note that object literals are best used when dealing with direct mappings rather than when executing complex logic for each case.
Best practices for switch statements

The following tactics will help you deploy switch statements to your advantage:
Always include a default case

A default case ensures that your code can handle unexpected or unrecognized values gracefully. This acts as a safety net; if none of the specified cases match, your program can still execute a fallback code block.
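Here is a minimal sketch; the role names and handler functions are hypothetical:

```js
// The default case surfaces unexpected values instead of failing silently
switch (userRole) {
  case "admin":
    grantFullAccess();
    break;
  case "editor":
    grantEditAccess();
    break;
  default:
    console.warn(`Unhandled role: ${userRole}`);
}
```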
Use break to avoid fallthrough

Unless you intentionally want the execution to fall through to the next case, always include a break statement at the end of each case. This prevents unintentional behavior and makes your code logic clearer.
\\nIf your switch
statement grows too long or complex, consider refactoring your code into smaller, manageable functions or using an object literal. This improves readability and makes your code easier to maintain and debug.
Choose meaningful and descriptive values in your case statements. This self-documenting approach makes it easier for anyone reading the code to understand the purpose behind each case, ultimately leading to better maintainability.
\\nOn the flip side, avoid these common switch case pitfalls:
\\nbreak
statementAs mentioned earlier, forgetting to include a break statement can lead to unintended fallthrough. Always double-check your switch
statements to ensure each case ends with a break (unless a fallthrough is intentional).
Overusing switch statements

While switch statements are useful, they’re not always the best choice. If you find yourself writing a switch statement with dozens of cases, consider refactoring your code. Object literals or even a series of if...else statements might be more appropriate.
Omitting the default case

The default case is your safety net. It ensures that your code handles unexpected values gracefully. Always include a default case, even if it’s just to log an error message.
The switch
case in JS is a valuable addition to your JavaScript toolkit, offering a streamlined and organized way to handle multiple conditions. By using a switch
statement, you can simplify your code structure, especially when mapping values to specific actions or replacing cumbersome if...else
chains.
Remember, while switch
statements can make your code cleaner, they require careful handling of fallthrough and consistent use of break and default
cases to prevent unintended behavior.
\\n React components re-render whenever their parent updates, even if their props remain unchanged. This can cause performance issues, especially with large datasets or complex UI updates. React.memo helps optimize performance by memoizing components and preventing unnecessary re-renders. But does it always work?
\\nIn this guide, you’ll learn when to use React.memo
— and when to avoid it.
What is React.memo, and how does it work?

React.memo is a higher-order component (HOC) that memoizes functional components, preventing them from re-rendering unless their props change.

Does React.memo improve performance?

Yes, it helps optimize performance by skipping unnecessary re-renders. However, it should be used only when performance issues arise, as unnecessary memoization can add complexity.

When should you use React.memo?

Use React.memo when:

- A component re-renders often with the same props
- The component’s render work is expensive, such as long lists or heavy computations

Why is my React.memo component still re-rendering?

Even with React.memo, a component will still re-render if:

- Its own state or consumed context changes
- It receives new object, array, or function references as props on every render
- It uses useEffect or subscriptions that trigger updates

React.memo vs. useMemo vs. useCallback — what’s the difference?

Memoization is a performance optimization technique that caches the result of a function and returns the cached value for subsequent calls with the same inputs.
\\nIn React, memoization helps prevent unnecessary re-renders of components handling large datasets, resource-heavy operations, or expensive calculations.
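Outside of React, the idea looks like this minimal sketch of a memoized function:

```js
// Cache results keyed by the argument; repeated calls return the cached value
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg));
    }
    return cache.get(arg);
  };
}

const square = memoize((n) => n * n);
console.log(square(4)); // computed: 16
console.log(square(4)); // cached: 16
```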
\\nReact.memo
?React.memo
is a React API that caches functional components on the first render and returns the cached component as long as the props remain unchanged. If the props change, the component re-renders.
Under the hood, React.memo
uses Object.is
for a shallow comparison of the previous and new props. If they are identical, the cached version is returned; otherwise, the component re-renders.
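If the default shallow comparison isn’t enough, React.memo also accepts an optional second argument: a custom comparison function. Here is a minimal sketch with a hypothetical Chart component:

```jsx
import { memo } from "react";

// Hypothetical component: skip re-rendering unless the data points changed.
// Returning true tells React the props are equal, so the render is skipped.
const MemoizedChart = memo(Chart, (prevProps, nextProps) => {
  return (
    prevProps.points.length === nextProps.points.length &&
    prevProps.points.every((p, i) => p === nextProps.points[i])
  );
});
```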
Let’s consider an ecommerce case study where a product detail page displays reviews (a review component) for each product. The review component may re-render when unrelated parts of the product page update. This happens because, by default, React re-renders a child component whenever the parent component state changes.
\\nCreate a ProductDetailPage.jsx
component in your React project and add the following:
//ProductDetailPage.jsx\\n import React, { useState } from \\"react\\";\\n \\n const ProductDetailPage = () => {\\n const [cartCount, setCartCount] = useState(0);\\n const handleAddToCart = () => {\\n setCartCount(cartCount + 1);\\n alert(\\"Product added to cart!\\");\\n };\\n return (\\n <div className=\\"max-w-4xl mx-auto p-6\\">\\n {/* Product Section */}\\n <div className=\\"grid grid-cols-1 md:grid-cols-2 gap-8\\">\\n {/* Product Image */}\\n <div>\\n <img\\n src=\\"https://res.cloudinary.com/muhammederdem/image/upload/q_60/v1536405217/starwars/item-2.webp\\"\\n alt=\\"Product\\"\\n className=\\"w-full h-auto rounded-lg shadow-md\\"\\n />\\n </div>\\n {/* Product Details */}\\n <div>\\n <h1 className=\\"text-2xl font-bold mb-4\\">Awesome Product</h1>\\n <p className=\\"text-gray-700 mb-4\\">\\n This is a detailed description of the awesome product. It has\\n amazing features and great quality.\\n </p>\\n <p className=\\"text-xl font-semibold mb-4\\">$49.99</p>\\n <button\\n onClick={handleAddToCart}\\n className=\\"bg-blue-600 text-white px-6 py-2 rounded-lg hover:bg-blue-700 transition\\"\\n >\\n Add to Cart\\n </button>\\n <p className=\\"mt-2 text-sm text-gray-500\\">Cart Count: {cartCount}</p>\\n </div>\\n </div>\\n {/* Review Section */}\\n <ProductReview />\\n </div>\\n );\\n };\\n const ProductReview = () => {\\n const reviews = [\\n { id: 1, author: \\"John Doe\\", rating: 5, comment: \\"Amazing product!\\" },\\n { id: 2, author: \\"Jane Smith\\", rating: 4, comment: \\"Very good quality.\\" },\\n {\\n id: 3,\\n author: \\"Alex Johnson\\",\\n rating: 3,\\n comment: \\"It\'s decent for the price.\\",\\n },\\n ];\\n console.log(\\"ProductReview was rendered at\\", new Date().toLocaleTimeString());\\n return (\\n <div className=\\"mt-10\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">Customer Reviews</h2>\\n <div className=\\"space-y-4\\">\\n {reviews.map((review) => (\\n <div className=\\"p-4 border rounded-lg bg-gray-50 shadow-sm\\">\\n <p className=\\"font-semibold\\">{review.author}</p>\\n <p className=\\"text-yellow-500\\">{\\"⭐\\".repeat(review.rating)}</p>\\n <p className=\\"text-gray-600\\">{review.comment}</p>\\n </div>\\n ))}\\n </div>\\n </div>\\n );\\n };\\n export default ProductDetailPage;\\n\\n
The ProductDetailPage
component is the parent component of the ProductReview
component. It maintains a cartCount
state to track how many times the user clicks the “Add to Cart” button, updating the count and displaying a confirmation alert. The ProductReview
component renders a list of customer reviews and logs the render time to the console for tracking performance.
Running the project should result in the following:
\\nNotice how the ProductReview
component re-renders each time the user adds the product to the cart. This re-render is unnecessary because updating the cart does not impact the customer reviews.
Since the number of reviews in the ProductReview
component is small, the performance impact is negligible. However, let’s examine the effect when the component handles thousands of reviews. Update the reviews
array as follows:
const reviews = Array.from({ length: 10000 }, () => ({ id: Math.random(), author: \\"John Doe\\", rating: 5, comment: \\"Amazing product!\\" }));\\n\\n
At this point, adding the product to the cart introduces a noticeable lag. If you click the Add to Cart button multiple times in quick succession, the entire page may freeze. This happens because React re-renders all 10,000 reviews each time the state updates, significantly slowing down the application.
\\nClearly, excessive re-renders negatively impact performance. We can optimize this behavior by using React.memo
to prevent unnecessary re-renders of the ProductReview
component.
Optimizing with React.memo
To optimize performance, wrap the ProductReview
component with React.memo
as follows:
import React, { memo} from \\"react\\";\\n\\nconst ProductReview = memo(() => {\\n const reviews = Array.from({ length: 10000 }, () => ({ id: Math.random(), author: \\"John Doe\\", rating: 5, comment: \\"Amazing product!\\" }));\\n console.log(\\"ProductReview was rendered at\\", new Date().toLocaleTimeString());\\n return (\\n <div className=\\"mt-10\\">\\n <h2 className=\\"text-xl font-bold mb-4\\">Customer Reviews</h2>\\n <div className=\\"space-y-4\\">\\n {reviews.map((review) => (\\n <div key={review.id} className=\\"p-4 border rounded-lg bg-gray-50 shadow-sm\\">\\n <p className=\\"font-semibold\\">{review.author}</p>\\n <p className=\\"text-yellow-500\\">{\\"⭐\\".repeat(review.rating)}</p>\\n <p className=\\"text-gray-600\\">{review.comment}</p>\\n </div>\\n ))}\\n </div>\\n </div>\\n );\\n});\\n\\nexport default ProductDetailPage;\\n\\n
With this modification, ProductReview
will only re-render when its props change, effectively preventing unnecessary re-renders when updating the cart. This simple adjustment significantly improves the application’s responsiveness and ensures better performance:
React.memo
can be used to optimize re-renders by ensuring a component updates only when its props change. Let’s demonstrate this by adding a feature that allows users to change the text color of both the product name and the review header.
Updating ProductDetailPage
Modify the ProductDetailPage
component to include a dropdown that allows users to select a text color:
const ProductDetailPage = () => {\\n const [cartCount, setCartCount] = useState(0);\\n const [color, setColor] = useState(\\"\\");\\n\\n const handleChange = (event) => {\\n setColor(event.target.value);\\n };\\n\\n const handleAddToCart = () => {\\n setCartCount(cartCount + 1);\\n alert(\\"Product added to cart!\\");\\n };\\n\\n return (\\n <div className=\\"max-w-4xl mx-auto p-6\\">\\n {/* Product Section */}\\n <div className=\\"grid grid-cols-1 md:grid-cols-2 gap-8\\">\\n {/* Product Image */}\\n {/* Product Details */}\\n <div>\\n <h1 className={`text-2xl font-bold mb-4 ${color}`}>\\n Awesome Product\\n </h1>\\n <p className=\\"text-gray-700 mb-4\\">\\n This is a detailed description of the awesome product. It has\\n amazing features and great quality.\\n </p>\\n <p className=\\"text-xl font-semibold mb-4\\">$49.99</p>\\n <button\\n onClick={handleAddToCart}\\n className=\\"bg-blue-600 text-white px-6 py-2 rounded-lg hover:bg-blue-700 transition\\"\\n >\\n Add to Cart\\n </button>\\n <p className=\\"my-3 text-sm text-gray-500\\">Cart Count: {cartCount}</p>\\n <div>\\n <p className=\\"font-medium\\">Change Text Color</p>\\n <select\\n value={color}\\n onChange={handleChange}\\n className=\\"border rounded-lg p-2 bg-white shadow-sm focus:outline-none focus:ring-2 focus:ring-blue-500\\"\\n >\\n <option value=\\"\\" disabled>\\n -- Choose an option --\\n </option>\\n <option value=\\"text-blue-700\\">Blue</option>\\n <option value=\\"text-red-700\\">Red</option>\\n <option value=\\"text-green-700\\">Green</option>\\n </select>\\n </div>\\n </div>\\n </div>\\n {/* Review Section */}\\n <ProductReview color={color} />\\n </div>\\n );\\n};\\n\\n
Here, we’ve added a dropdown select to choose different text colors and pass the color
prop to the ProductReview
component.
To access the color
prop, make the following changes to the ProductReview
component:
const ProductReview = memo(({color}) => {\\n const reviews = Array.from({ length: 10000 }, () => ({ id: Math.random(), author: \\"John Doe\\", rating: 5, comment: \\"Amazing product!\\" }));\\n console.log(\\"ProductReview was rendered at\\", new Date().toLocaleTimeString());\\n\\n return (\\n <div className=\\"mt-10\\">\\n <h2 className={`text-xl font-bold mb-4 ${color}`}>Customer Reviews</h2>\\n <div className=\\"space-y-4\\">\\n {reviews.map((review) => (\\n <div key={review.id} className=\\"p-4 border rounded-lg bg-gray-50 shadow-sm\\">\\n <p className=\\"font-semibold\\">{review.author}</p>\\n <p className=\\"text-yellow-500\\">{\\"⭐\\".repeat(review.rating)}</p>\\n <p className=\\"text-gray-600\\">{review.comment}</p>\\n </div>\\n ))}\\n </div>\\n </div>\\n );\\n});\\n\\n
With these modifications:

- The ProductReview component will re-render only when the color prop changes
- Updating the cart count no longer re-renders the ProductReview component

This ensures better performance and prevents unnecessary renders, improving the responsiveness of the application:
\\nDid you notice the lag in the UI after choosing a text color? This happens because when the color
prop changes, thousands of reviews are regenerated using Array.from()
and then re-rendered. To fix this issue, we’ll use the useMemo
hook to cache the result of the reviews
computation.
Optimizing ProductReview with useMemo
const ProductReview = memo(({ color }) => {\\n const reviews = useMemo(() =>\\n Array.from({ length: 10000 }, () => ({\\n id: Math.random(),\\n author: \\"John Doe\\",\\n rating: 5,\\n comment: \\"Amazing product!\\"\\n })), []);\\n\\n console.log(\\"ProductReview was rendered at\\", new Date().toLocaleTimeString());\\n \\n return (\\n <div className=\\"mt-10\\">\\n <h2 className={`text-xl font-bold mb-4 ${color}`}>Customer Reviews</h2>\\n <div className=\\"space-y-4\\">\\n {reviews.map((review) => (\\n <div key={review.id} className=\\"p-4 border rounded-lg bg-gray-50 shadow-sm\\">\\n <p className=\\"font-semibold\\">{review.author}</p>\\n <p className=\\"text-yellow-500\\">{\\"⭐\\".repeat(review.rating)}</p>\\n <p className=\\"text-gray-600\\">{review.comment}</p>\\n </div>\\n ))}\\n </div>\\n </div>\\n );\\n});\\n\\n
With this modification, the reviews
array is computed only on the first render, significantly improving performance.
Adding dependencies to useMemo
const [size, setSize] = useState(10);\\nconst reviews = useMemo(() =>\\n Array.from({ length: size }, () => ({\\n id: Math.random(),\\n author: \\"John Doe\\",\\n rating: 5,\\n comment: \\"Amazing product!\\"\\n })), [size]);\\n\\n
Now, useMemo
will recompute reviews
each time the size
dependency changes.
By default, function definitions in React components change on every re-render. If a function is passed as a prop to a memoized component, memoization will not work unless the function itself is memoized.
\\nexport default function Cart({ orderId }) {\\n function handleCheckout(orderDetails) {\\n post(\'/checkout/\' + orderId + \'/buy\', {\\n orderDetails\\n });\\n }\\n\\n return <Checkout onSubmit={handleCheckout} />;\\n}\\n\\n
In this case, handleCheckout
is re-created every render, causing unnecessary re-renders in Checkout
.
Memoizing the function with useMemo
export default function Cart({ orderId }) {\\n const handleCheckout = useMemo(() => (orderDetails) => {\\n post(\'/checkout/\' + orderId + \'/buy\', {\\n orderDetails\\n });\\n }, [orderId]);\\n\\n return <Checkout onSubmit={handleCheckout} />;\\n}\\n\\n
A cleaner approach is to use useCallback
.
Memoizing the function with useCallback
const handleCheckout = useCallback(\\n (orderDetails) => {\\n post(\'/checkout/\' + orderId + \'/buy\', {\\n orderDetails\\n });\\n }, [orderId]);\\n\\n
Similar to functions, array and object definitions change on every re-render. If an object or array is passed as props to a memoized component, memoization will not work unless they are also memoized.
\\nconst paymentOptions = useMemo(() => ({\\n paymentMode: \'credit-card\',\\n amount: amount\\n}), [amount]);\\n\\n
const cardTypes = useMemo(() => [\'credit-card\', \'debit-card\'], []);\\n\\n
By memoizing objects and arrays, we ensure that components relying on them do not re-render unnecessarily, leading to better performance and efficiency in React applications.
\\nJust like the function definition, every array and object definition in a React component changes on every rerender and if an object or array is passed as props to a memoized component, memoization will not work.
\\nHere is how to memoize an object:
\\nconst paymentOptions = useMemo(() => {\\n return {\\n paymentMode: \'credit-card\',\\n amount: amount\\n };\\n}, [amount]);\\n\\n
Here is how to memoize an array:
\\nconst cardTypes = useMemo(() => {\\n return [\'credit-card\', \'debit-card\'];\\n}, []);\\n\\n
By default, React re-renders components when there’s a change in context. This also applies to memoized components, as demonstrated in the following code snippet:
\\nimport React, { createContext, useContext, useState, memo } from \'react\';\\nconst UserContext = createContext();\\n\\nexport const App = () => {\\n const [user, setUser] = useState({ name: \'John\', age: 30 });\\n return (\\n <UserContext.Provider value={user}>\\n <UserName />\\n <p style={{ color: \'white\' }}>Age: {user.age}</p>\\n <button onClick={() => setUser({ ...user, age: user.age + 1 })}>\\n Increment Age\\n </button>\\n </UserContext.Provider>\\n );\\n};\\n\\nconst UserName = memo(() => {\\n const { name } = useContext(UserContext);\\n console.log(\'UserName rendered\');\\n return <div style={{ color: \'white\' }}>User Name: {name}</div>;\\n});\\n\\n
Each time the user’s age
value changes, the memoized UserName
component still re-renders, even though the name
value remains unchanged. To prevent this, pass only the required part of the context as a prop to the memoized component.
import React, { createContext, useContext, useState, memo } from \'react\';\\nconst UserContext = createContext();\\n\\nexport const App = () => {\\n const [user, setUser] = useState({ name: \'John\', age: 30 });\\n return (\\n <UserContext.Provider value={user}>\\n <UserDisplay />\\n <p style={{ color: \'white\' }}>Age: {user.age}</p>\\n <button onClick={() => setUser({ ...user, age: user.age + 1 })}>\\n Increment Age\\n </button>\\n </UserContext.Provider>\\n );\\n};\\n\\nconst UserDisplay = () => {\\n const { name } = useContext(UserContext);\\n return <UserName name={name} />;\\n};\\n\\nconst UserName = memo(({ name }) => {\\n console.log(\'UserName rendered\');\\n return <div style={{ color: \'white\' }}>User Name: {name}</div>;\\n});\\n\\n
Avoid using memoization in the following cases:

Values used inside useEffect

Memoizing values used inside useEffect is unnecessary. Instead, move the value inside the effect:
useEffect(() => {\\n const options = {\\n serverUrl: \'https://localhost:1234\',\\n roomId: roomId\\n };\\n\\n const connection = createConnection(options);\\n connection.connect();\\n return () => connection.disconnect();\\n}, [roomId]);\\n\\n
Wrapping JSX nodes

Avoid wrapping JSX nodes in useMemo, as it prevents straightforward conditional rendering:
const children = useMemo(() => <ProductList items={products} />, [products]);\\n\\n
PureComponent in class components

For class-based components, React.memo
, useMemo
, and useCallback
will not work. Instead, use PureComponent
, which re-renders only when its props or state change, based on a shallow comparison:
class PureComponentExample extends React.PureComponent {\\n render() {\\n console.log(\'Pure Component rendered\');\\n return <div>{this.props.value}</div>;\\n }\\n}\\n\\n
Limitations of PureComponent
If you pass objects or arrays that mutate without changing their reference, PureComponent
may not detect changes:
import React, { PureComponent } from \'react\';\\n\\nclass ChildComponent extends PureComponent {\\n render() {\\n console.log(\'ChildComponent rendered\');\\n return <div>Message: {this.props.data.message}</div>;\\n }\\n}\\n\\nexport default class App extends React.Component {\\n state = {\\n data: { message: \'Hello\' }\\n };\\n\\n updateMessage = () => {\\n const { data } = this.state;\\n data.message = \'Hello, World!\'; // Mutating the state object\\n this.setState({ data }); // Setting the same object reference\\n };\\n\\n render() {\\n return (\\n <div>\\n <ChildComponent data={this.state.data} />\\n <button onClick={this.updateMessage}>Update Message</button>\\n </div>\\n );\\n }\\n}\\n\\n
Because this.setState({ data })
does not create a new object reference, PureComponent
cannot detect the update, and ChildComponent
will not re-render. To fix this, always create a new reference:
updateMessage = () => {\\n this.setState({ data: { ...this.state.data, message: \'Hello, World!\' } });\\n};\\n\\n
Comparing React.memo to useMemo, useCallback, and PureComponent

The following table will help you know when to use React.memo, useMemo, useCallback, and PureComponent.
| Feature | React.memo | useMemo | useCallback | PureComponent |
| --- | --- | --- | --- | --- |
| Usage | Prevent component re-renders | Avoid recalculating values | Avoid recreating functions | Avoid re-rendering in class components |
| Scope | Component-level | Value-level | Function-level | Class components only |
In this article, we explored how React.memo
can improve app performance by skipping unnecessary re-renders. We covered real-world use cases, best practices, and when to avoid memoization. Additionally, we compared React.memo
with useMemo
, useCallback
, and PureComponent
to understand their specific use cases.
By applying these techniques wisely, you can significantly enhance the efficiency of your React applications!
Imagine trying to drink from a firehose: overwhelming and chaotic. Now, think of sipping water from a glass: controlled and efficient. That’s exactly how Node.js readable streams handle data: processing it in small chunks instead of overwhelming our application.
\\nNode.js’s streaming architecture is key to its high performance. In this guide, we’ll dive into Node.js readable streams — the pipelines that bring data into our application. We’ll explore how to work with them, build our application with composable stream components, and handle errors gracefully.
\\nLet’s get started!
\\nThere are four primary types of Node.js streams, each serving a specific purpose:
\\nStream type | \\nRole | \\nCommon use cases | \\n
---|---|---|
Readable streams | \\nFetch data from a source | \\nFiles, HTTP requests, user input | \\n
Writable streams | \\nSend data to a destination | \\nFiles, HTTP responses | \\n
Duplex streams | \\nBidirectional data flow | \\nTCP sockets, WebSocket connections | \\n
Transform streams | \\nA subtype of duplex streams that modifies data as it flows through | \\nCompression, encryption, parsing | \\n
In this article, we’ll focus on readable streams.
\\nNode.js readable streams act as data sources, allowing us to consume information from files, network requests, and user input. By processing data in small, manageable chunks, they prevent memory overload and enable scalable, real-time data handling.
\\nWe can create readable streams using the stream.Readable
class or its specialized implementations.
Common readable stream implementations include:

- fs.createReadStream: Streams data from files on disk; particularly useful for handling large datasets
- http.IncomingMessage: Handles incoming HTTP request bodies; commonly used in Express/Node.js servers
- process.stdin: Captures real-time user input from the command line

Here is an example of reading the contents of the input.txt file using a readable stream:
const fs = require(\\"fs\\");\\n// Create a readable stream from a file\\nconst readStream = fs.createReadStream(\\"input.txt\\", { encoding: \\"utf-8\\" });\\n\\n
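For comparison, here is a minimal sketch of consuming process.stdin, the readable stream for command-line input:

```js
// Echo each line the user types; press Ctrl+D to end the input stream
process.stdin.setEncoding("utf-8");

process.stdin.on("data", (chunk) => {
  console.log("You typed:", chunk.trim());
});

process.stdin.on("end", () => {
  console.log("Input stream closed.");
});
```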
While Node.js provides built-in readable streams, there are times when we need to generate or adapt data in a custom way. Custom readable streams are suitable for:
\\nBelow is an example of a custom readable stream. We extend the Readable
class and implement the _read()
method:
const { Readable } = require(\'stream\');\\n\\n// 1. Extend the Readable class\\nclass HelloWorldStream extends Readable {\\n // 2. Implement the _read() method\\n _read(size) {\\n // Push data incrementally\\n this.push(\'Hello, \'); // First chunk\\n this.push(\'world!\'); // Second chunk\\n\\n // Signal end of data by pushing `null`\\n this.push(null); \\n }\\n}\\n\\n// 3. Instantiate and consume the stream\\nconst helloWorld = new HelloWorldStream();\\nhelloWorld.on(\'data\', (chunk) => {\\n console.log(\'Received chunk:\', chunk.toString());\\n});\\n\\n// Output:\\n// Received chunk: Hello, \\n// Received chunk: world!\\n\\n
As the code above shows, we can control how data is generated and chunked using the custom readable stream.
\\nReadable streams in Node.js are event-driven, allowing us to handle key stages of the data lifecycle. By listening to specific events, we can process data chunks, react to errors, and detect when the stream has completed or closed.
\\nKey events in a readable stream include:
- data: Emitted when a chunk of data is available to be read
- readable: Emitted when data can be read from the internal buffer, or when the end of the stream has been reached (used in paused mode)
- end: Emitted when there is no more data to be read
- error: Emitted if an error occurs (e.g., file not found or permission issues)
- close: Emitted when the stream and any underlying resources are closed

In the following example, we set up event listeners to read a file, log the chunks, handle potential errors, and log a message upon completion and stream closure:
\\nconst fs = require(\'fs\');\\nconst inputFilePath = \'example.txt\';\\n// Create a readable stream\\nconst readStream = fs.createReadStream(inputFilePath, { encoding: \'utf-8\' });\\n\\n// Listen for \'data\' events to process chunks\\nreadStream.on(\'data\', (chunk) => {\\n console.log(\'Received chunk:\', chunk);\\n});\\n\\n// Listen for \'end\' to detect when reading is complete\\nreadStream.on(\'end\', () => {\\n console.log(\'Finished reading the file.\');\\n});\\n\\n// Listen for \'error\' to handle failures\\nreadStream.on(\'error\', (err) => {\\n console.error(\'An error occurred:\', err.message);\\n});\\n\\n// Listen for \'close\' to perform cleanup\\nreadStream.on(\'close\', () => {\\n console.log(\'Stream has been closed.\');\\n});\\n\\n
Readable streams operate in two modes, flowing and paused, giving developers fine-grained control over how and when data is consumed:
\\nMode | \\nBehavior | \\nUse case | \\n
---|---|---|
Flowing | \\nData is read as fast as possible, emitting data events | \\nContinuous processing | \\n
Paused | \\nData must be explicitly read using .read() | \\nPrecise control over data flow | \\n
Here’s an example of transitions between these two modes:
\\nconst fs = require(\\"fs\\");\\n\\n// Create a readable stream from a file\\nconst readStream = fs.createReadStream(\\"input.txt\\", { encoding: \\"utf-8\\" });\\n\\n// Starts in paused mode\\nconsole.log(readStream.isPaused()); // true\\n\\n// Switch to flowing mode\\nreadStream.on(\'data\', (chunk) => { \\n console.log(\'Auto-received:\', chunk); \\n});\\nconsole.log(readStream.isPaused()); // false\\n\\n// Return to paused mode\\nreadStream.pause(); \\n\\n// Manually pull data in paused mode\\nreadStream.on(\'readable\', () => {\\n let chunk;\\n while ((chunk = readStream.read()) !== null) {\\n console.log(\'Manually read:\', chunk);\\n }\\n});\\n\\n// Switch back to flowing mode\\nreadStream.resume();\\n\\n
This dynamic switching between paused and flowing modes provides flexibility. Use paused mode when we need precise control over data consumption (e.g., batch operations), and flowing mode for continuous processing (e.g., live data feeds).
\\nRobust error handling is essential when working with Node.js streams because they can fail due to missing files, permission errors, network interruptions, or corrupted data.
\\nSince streams inherit from EventEmitter
, they emit an \'error\'
event when something goes wrong. Proper error handling involves listening to the \'error\'
event and implementing appropriate recovery strategies.
General steps for error handling in streams:
- Listen for 'error' events: Attach an event listener to the readable stream to catch errors
- Clean up resources: Call destroy() to close the stream and release underlying resources
- Recover gracefully: For transient failures, consider a retry strategy (covered below)

const fs = require('fs');\\nconst readableStream = fs.createReadStream('example.txt', 'utf-8');\\n\\n// Listen for 'error' events\\nreadableStream.on('error', (err) => {\\n console.error('Stream error:', err.message);\\n\\n // Clean up resources\\n readableStream.destroy(); // Close the stream and release resources\\n});\\n\\n// Optionally, pass an error to destroy() to emit an 'error' event\\n// readableStream.destroy(new Error('Custom error message'));
Using destroy() for cleanup

The destroy() method is the recommended approach to close a stream and release its resources. It ensures the stream is immediately closed and that underlying resources are released.
You can optionally pass an error to destroy()
:
readableStream.destroy(new Error(\'Stream terminated due to an issue.\'));\\n\\n
This will subsequently emit an \'error\'
event on the stream, which can be useful for resource cleanup and signaling an unexpected termination.
In real-world applications, transient issues like network glitches or temporary file locks can cause stream errors. Instead of failing immediately, implementing a retry mechanism can help recover gracefully. Below is an example of how to add retry logic to a readable stream:
\\nconst fs = require(\'fs\');\\n\\nfunction createReadStreamWithRetry(filePath, retries = 3) {\\n let attempts = 0;\\n\\n function attemptRead() {\\n const readableStream = fs.createReadStream(filePath, \'utf8\');\\n\\n // Handle data chunks\\n readableStream.on(\'data\', (chunk) => {\\n console.log(`Received chunk: ${chunk}`);\\n });\\n\\n // Handle successful completion\\n readableStream.on(\'end\', () => {\\n console.log(\'File reading completed successfully.\');\\n });\\n\\n // Handle errors\\n readableStream.on(\'error\', (err) => {\\n attempts++;\\n console.error(`Attempt ${attempts} failed:`, err.message);\\n\\n if (attempts < retries) {\\n console.log(`Retrying... (${retries - attempts} attempts left)`);\\n attemptRead(); // Retry reading the file\\n } else {\\n console.error(\'Max retries reached. Giving up.\');\\n readableStream.destroy(); // Close the stream and release resources\\n }\\n });\\n }\\n\\n attemptRead(); // Start the first attempt\\n}\\n\\n// Usage\\ncreateReadStreamWithRetry(\'./example.txt\', 3); // File exists\\ncreateReadStreamWithRetry(\'./fileNotExists.txt\', 3); // File does not exist\\n\\n
The above function createReadStreamWithRetry
reads a file using a Node.js readable stream and incorporates a retry mechanism to handle potential errors during file access. If an error occurs, it retries reading the file a specified number of times before closing the stream.
By implementing a retry mechanism, we can make our application more reliable and stable.
Streams aren’t just for handling large data flows; they’re also a way to create modular, reusable code. Think of them as LEGO bricks for data workflows: small components that snap together to create powerful pipelines. Each stream handles a single responsibility, making our code easier to debug, test, and extend.
\\nHere is an example that reads a file, transforms its content, compresses it, and writes the result — all in a memory-efficient stream:
\\nconst fs = require(\'fs\');\\nconst zlib = require(\'zlib\');\\nconst { Transform } = require(\'stream\');\\n\\n// 1. Create stream components\\nconst readStream = fs.createReadStream(\'input.txt\'); // Source: Read file\\nconst writeStream = fs.createWriteStream(\'output.txt.gz\');// Destination: Write compressed file\\n\\n// 2. Transform stream: Convert text to uppercase\\nconst upperCaseTransform = new Transform({\\n transform(chunk, _, callback) {\\n this.push(chunk.toString().toUpperCase()); // Modify data\\n callback();\\n }\\n});\\n\\n// 3. Compression stream: Gzip the data\\nconst gzip = zlib.createGzip();\\n\\n// 4. Assemble the pipeline\\nreadStream\\n .pipe(upperCaseTransform) // Step 1: Transform text\\n .pipe(gzip) // Step 2: Compress data\\n .pipe(writeStream); // Step 3: Write output\\n\\n
This chain of stream operations, connected by pipes, showcases how simple, reusable components can be combined to build complex data processing pipelines. This is just a taste of what you can achieve with streams. The possibilities are endless.
\\n\\nWhen using chained .pipe()
calls, errors in intermediate streams (like gzip
or upperCaseTransform
) won’t propagate to the final destination stream’s error handler. This can lead to uncaught exceptions, resource leaks, and application crashes. Let’s explore the problem and solutions in detail.
Here’s an example of a flawed implementation that misses intermediate errors:
\\nconst fs = require(\'fs\');\\nconst zlib = require(\'zlib\');\\nconst { Transform } = require(\\"stream\\");\\n\\nconst readStream = fs.createReadStream(\'input.txt\');\\nconst destination = fs.createWriteStream(\'output.txt.gz\');\\nconst gzip = zlib.createGzip();\\nconst upperCaseTransform = new Transform({\\n transform(chunk, encoding, callback) {\\n this.push(chunk.toString().toUpperCase());\\n callback();\\n },\\n});\\n\\n// Flawed implementation - misses intermediate errors\\nreadStream\\n .pipe(upperCaseTransform)\\n .pipe(gzip)\\n .pipe(destination)\\n .on(\'error\', (err) => { // Only catches destination errors\\n console.error(\'Pipeline failed:\', err);\\n destination.close();\\n })\\n .on(\'finish\', () => {\\n console.log(\'Pipeline succeeded!\');\\n });\\n\\n
This approach fails because errors in intermediate streams like upperCaseTransform
or gzip
won’t propagate to the final .on(\'error\')
handler. The unhandled errors could crash the entire Node.js process. Furthermore, resources like file descriptors or memory buffers might not be properly released without explicit error handling.
To fix the issue, we can attach individual error handlers to every stream:
\\n// Proper error handling\\nreadStream\\n .on(\'error\', (err) => {\\n console.error(\'Read error:\', err);\\n readStream.close();\\n })\\n .pipe(upperCaseTransform)\\n .on(\'error\', (err) => {\\n console.error(\'Transform error:\', err);\\n upperCaseTransform.destroy();\\n })\\n .pipe(gzip)\\n .on(\'error\', (err) => {\\n console.error(\'Gzip error:\', err);\\n gzip.destroy();\\n })\\n .pipe(destination)\\n .on(\'error\', (err) => {\\n console.error(\'Write error:\', err);\\n destination.close();\\n })\\n .on(\'finish\', () => {\\n console.log(\'Pipeline succeeded!\');\\n });\\n\\n
The above code will handle errors for each stream, but the repetitive error handlers are not ideal.
\\nA cleaner approach is to use the pipeline
method. It automatically propagates errors from any stream to a single error handler and ensures proper cleanup:
const fs = require(\'fs\');\\nconst zlib = require(\'zlib\');\\nconst { pipeline } = require(\'stream\');\\n\\n// 1. Create stream components\\nconst source = fs.createReadStream(\'input.txt\');\\nconst gzip = zlib.createGzip();\\nconst destination = fs.createWriteStream(\'output.txt.gz\');\\n\\n// 2. Connect streams using pipeline\\npipeline(\\n source, // Read from file\\n gzip, // Compress data\\n destination, // Write to archive\\n (err) => { // Unified error handler\\n if (err) {\\n console.error(\'Pipeline failed:\', err);\\n // Optional: Add retry logic here\\n } else {\\n console.log(\'Compression successful!\');\\n }\\n }\\n);\\n\\n
In the above example, errors in any stream (source
, gzip
, or destination
) are passed to the error-handling callback function. We ensure streams are closed even on failure, and avoid repeated error handlers.
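Modern Node versions also expose a promise-based pipeline in the stream/promises module, which pairs naturally with async/await; here is a sketch of the same compression pipeline:

```js
const fs = require("fs");
const zlib = require("zlib");
const { pipeline } = require("stream/promises");

async function compressFile() {
  try {
    // Same pipeline as above, but awaitable: any stream error rejects the promise
    await pipeline(
      fs.createReadStream("input.txt"),
      zlib.createGzip(),
      fs.createWriteStream("output.txt.gz")
    );
    console.log("Compression successful!");
  } catch (err) {
    console.error("Pipeline failed:", err);
  }
}

compressFile();
```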
Node.js readable streams are more than just a tool — they’re a core pattern for building efficient, scalable applications.
\\nIn this guide, we explored how readable streams process data in small chunks, manage data flow with paused/flowing modes, handle errors, and ensure resource cleanup. We also discussed chaining, transforming, and piping streams like modular components. Whether parsing terabytes of logs or streaming live sensor data, readable streams provide an efficient way to handle data.
\\nThe code snippets in the article can be found here. For more details and best practices, refer to the Node.js Stream API documentation.
When working with JavaScript, you may wonder whether it has a built-in dictionary type similar to Python. The short answer is no — JavaScript does not have a dedicated dictionary structure. However, it provides two powerful alternatives for handling key-value pairs: Objects and Maps.
\\nObjects have long been JavaScript’s primary way of storing structured data, while Maps were introduced in ES6 to offer more flexibility and efficiency for specific use cases. Understanding when to use an Object versus a Map is crucial for writing clean, optimized, and maintainable JavaScript code.
\\nIn this guide, we’ll break down how to use Objects and Maps, explore their differences, and help you choose the best one for your needs.
\\nAn object is a JavaScript data type used to store key-value data. If we need to store a user’s information, we can use an object:
\\nconst userInfo = { name: \'John Doe\', age: 25, address: \'Boulevard avenue\' };\\n
Without objects or a key-value data type, we’d have to store each datum of John Doe separately, which would make the app tedious to maintain. From the code sample, you’ll notice we were able to store both a string and a number in the same variable. This is why objects (and key-value pairs in general) are quite important and useful.
\\nYou can store any data type in a JavaScript object, even a function or other objects:
\\nconst userInfo = {\\n name: \\"John Doe\\",\\n age: 25,\\n address: \\"Boulevard avenue\\",\\n printName: function () {\\n return this.name;\\n },\\n favoriteColors: [\\"Red\\", \\"Blue\\"],\\n};\\n
The examples we’ve seen so far are object literals:
\\nconst userInfo = { name: \'John Doe\', age: 25 };\\n
JavaScript provides an Object
constructor that can be used to manage objects:
const userInfo = new Object();\\nuserInfo.name = \'John Doe\'; \\nuserInfo.age = 25;\\n
The Object
constructor is first called before values are assigned to keys. This is useful when creating an object from an existing object:
const userAppInfo = { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 } \\nconst userInfo = new Object();\\nuserInfo.name = \'John Doe\'; \\nuserInfo.age = 25; \\n\\nfor (let key in userAppInfo) { \\n userInfo[key] = userAppInfo[key];\\n}\\nconsole.log(userInfo); // { name: \'John Doe\', age: 25, lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 }\\n
Don’t worry if you’re not sure how we accessed the object values; we’ll tackle that later. In the meantime, just keep in mind that an object can be “copied” from another. There are easier ways of doing this that we’ll go over in a bit.
\\nconst userAppInfo = { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 }\\nconst userInfo = Object.create(userAppInfo);\\nuserInfo.name = \'John Doe\'; \\nuserInfo.age = 25; \\n\\nconsole.log(userInfo); // { name: \'John Doe\', age: 25 }\\nconsole.log(userInfo.__proto__) // { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 }\\n
With Object.create()
, we can pass a custom prototype like we just did above. By default, all objects inherit from Object.prototype
. In this case, we’ve overridden the default prototype of userInfo
with a custom one. You can always access the properties of the prototype in the created object:
const userAppInfo = {\\n lastLoggedIn: \\"2024-12-10 19:54:23\\",\\n todos: 10,\\n convertLastLoggedInToDate: function () {\\n return new Date(this.lastLoggedIn);\\n },\\n};\\n\\nconst userInfo = Object.create(userAppInfo);\\nuserInfo.name = \\"John Doe\\";\\nuserInfo.age = 25;\\nconsole.log(userInfo.convertLastLoggedInToDate()); // 2024-12-10T18:54:23.000Z\\n
If you pass null
into Object.create
, it doesn’t come with any prototype. This is called a null-prototype object.
Object.assign
is another way of copying an object — or, rather, creating another object from an existing one:
const userAppInfo = { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 };\\nconst userInfo = Object.assign({ name: \'John Doe\', age: 25 }, userAppInfo);\\nconsole.log(userInfo) // { name: \'John Doe\', age: 25, lastLoggedIn: \'2024-12-10 19:54:23\', todos: 10 }\\n
The userAppInfo
object in this case is the source object while { name: \'John Doe\', age: 25 }
is the target object. Similar properties/keys in both the target and source objects are overwritten by the source object:
const userAppInfo = { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 };\\nconst userInfo = Object.assign({ name: \'John Doe\', age: 25, todos: 9 }, userAppInfo);\\nconsole.log(userInfo) // { name: \'John Doe\', age: 25, todos: 10, lastLoggedIn: \'2024-12-10 19:54:23\' }\\n
The spread operator is the most common way of creating a new object from an existing one:
\\nconst userAppInfo = { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 };\\nconst userInfo = { ...userAppInfo, name: \'John Doe\', age: 25 };\\nconsole.log(userInfo) // { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10, name: \\"John Doe\\", age: 25 }\\n
Similar keys are overwritten by the latest item to the object:
\\nconst userAppInfo = { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 };\\nconst userInfo = { ...userAppInfo, name: \'John Doe\', age: 25, todos: 9 };\\nconsole.log(userInfo) // { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 9, name: \\"John Doe\\", age: 25 }\\n
In this case, the new todos
will overwrite that of the userAppInfo
. We can simply turn that around like so:
const userAppInfo = { lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 };\\nconst userInfo = { name: \'John Doe\', age: 25, todos: 9, ...userAppInfo };\\nconsole.log(userInfo) // { name: \\"John Doe\\", age: 25, lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10, }\\n
The examples so far have shown how an object is accessed as a whole:
\\nconst userInfo = { name: \'John Doe\', age: 25 };\\nconsole.log(userInfo) // { name: \'John Doe\', age: 25 }\\n
There is no special order in which it will be logged; it is simply based on the order of the properties in the object.
\\nThough items in an object may contain related pieces of data, you’d most often need them individually:
\\nconst userInfo = { name: \'John Doe\', age: 25 };\\nconsole.log(userInfo.name) // \'John Doe\'\\nconsole.log(userInfo.createdAt) // undefined\\n
There are cases where the key cannot be used like we did above. For example:
\\nconst userInfo = { name: \'John Doe\', age: 25, \'date-of-birth\': 2000 }\\nconsole.log(userInfo.date-of-birth) // will throw an error\\n
Instead, we use the square brackets with the key as a string:
\\nconsole.log(userInfo[\'date-of-birth\']) // 2000\\n
Another use case for square brackets is for dynamic keys.
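For example, when the key is only known at runtime, such as when it is stored in a variable, bracket notation is the only option; here's a quick sketch:

const userInfo = { name: 'John Doe', age: 25 };

const field = 'age'; // key determined at runtime
console.log(userInfo[field]); // 25
console.log(userInfo.field);  // undefined, dot notation looks for a literal 'field' key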
\\nThere are times when you need to perform actions on each item. In this case, you’d need to go through each item. That’s where loops come in:
\\nconst userInfo = { name: \'John Doe\', age: 25 };\\n\\nfor (let key in userInfo) {\\n console.log(userInfo[key]);\\n}\\n// John Doe\\n// 25\\n
Unlike arrays and maps in JavaScript, objects have no built-in size property. One way to compute it is with the for…in loop:
const userInfo = { name: \\"John Doe\\", age: 25, \\"date-of-birth\\": 2000 };\\n\\nconst getSizeOfObject = (obj) => {\\n let length = 0;\\n for (let key in obj) {\\n length++;\\n }\\n return length;\\n};\\n\\nconsole.log(getSizeOfObject(userInfo)); // 3\\n
With the for…in
loop, we can access both the keys and values of an object separately:
const userInfo = { name: \'John Doe\', age: 25 };\\n\\nconst keys = [];\\nconst values = []\\nfor (let key in userInfo) {\\n keys.push(key);\\n values.push(userInfo[key]);\\n}\\n\\nconsole.log(keys) // [\'name\', \'age\']\\nconsole.log(values) // [\'John Doe\', 25];\\n
JavaScript provides an easier way to access the keys and values of an object separately:
\\nconst userInfo = { name: \'John Doe\', age: 25 };\\nconst keys = Object.keys(userInfo);\\nconst values = Object.values(userInfo);\\n\\nconsole.log(keys) // [\'name\', \'age\']\\nconsole.log(values) // [\'John Doe\', 25];\\n
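Relatedly, Object.entries returns the keys and values together as key-value pairs; we will lean on it later when converting objects to maps:

const userInfo = { name: 'John Doe', age: 25 };
const entries = Object.entries(userInfo);

console.log(entries); // [ [ 'name', 'John Doe' ], [ 'age', 25 ] ]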
Created objects are easy to update. You can update an existing property:
\\nconst userInfo = { name: \'John Doe\', age: 25, todos: 9 };\\n\\nuserInfo.todos = 11;\\nconsole.log(userInfo) // { name: \'John Doe\', age: 25, todos: 11 }\\n
Also, new properties can be added to the objects:
\\nconst userInfo = { name: \'John Doe\', age: 25, todos: 9 };\\n\\nuserInfo.lastLoggedIn = \\"2024-12-10 19:54:23\\";\\nconsole.log(userInfo) // { name: \'John Doe\', age: 25, todos: 9, lastLoggedIn: \\"2024-12-10 19:54:23\\" }\\n
In JavaScript, it is possible to either delete items of an object or the object entirely:
\\nconst userInfo = { name: \'John Doe\', age: 25, todos: 9 };\\ndelete userInfo.todos;\\n\\nconsole.log(userInfo.todos) // undefined\\n
delete
is used to remove a property from an object. There is no operator for deleting an entire object; instead, we remove the reference to it, for example by assigning null or undefined to the variable, leaving the object to be garbage-collected:
let userInfo = { name: \'John Doe\', age: 25, todos: 9 };\\n\\nuserInfo = undefined;\\nconsole.log(userInfo) // undefined\\n
Almost all objects inherit from Object.prototype unless a custom prototype is passed. Passing null as the prototype doesn't throw an error; the resulting object simply has no prototype whatsoever. (Note that passing undefined does throw a TypeError, since a prototype may only be an object or null):
const userInfo = Object.create(null);\\nuserInfo.name = \'John Doe\';\\nuserInfo.age = 25;\\nconsole.log(userInfo.__proto__); // undefined\\n
When debugging, this can be confusing, as the common methods and properties from Object.prototype are not available. For example, converting an object to a string fails:
\\nconst userAppInfo = Object.create({ lastLoggedIn: \\"2024-12-10 19:54:23\\", todos: 10 });\\nuserAppInfo.name = \'John Doe\';\\nuserAppInfo.age = 25;\\nconsole.log(`${userAppInfo}`) // [object Object]\\n\\nconst userInfo = Object.create(null);\\nuserInfo.name = \'John Doe\';\\nuserInfo.age = 25;\\nconsole.log(`${userInfo}`); // TypeError: Cannot convert object to primitive value\\n
You can also set null
on object literals using the __proto__
key:
const userInfo = { name: \'John Doe\', age: 25, __proto__: null } \\nconsole.log(`${userInfo}`); // TypeError: Cannot convert object to primitive value\\n
One good reason to set null as your prototype is that null-prototype objects are immune to prototype pollution attacks: if malicious code adds properties to Object.prototype, every object inheriting from it is exposed, except null-prototype objects.
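To illustrate, here's a minimal sketch of how a polluted Object.prototype leaks into ordinary objects but not into null-prototype ones:

// Simulated pollution (illustrative only)
Object.prototype.isAdmin = true; // malicious code modifies the shared prototype

const normalUser = { name: 'John Doe' };
console.log(normalUser.isAdmin); // true, inherited from the polluted prototype

const safeUser = Object.create(null);
safeUser.name = 'John Doe';
console.log(safeUser.isAdmin); // undefined, null-prototype objects are unaffected

delete Object.prototype.isAdmin; // clean up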
JavaScript maps provide a unique way to store data in key-value pairs. Each key is unique, and insertion order is remembered. One other edge that Maps have over objects is that any data type can be used as either the key or the value:
\\nconst userInfo = new Map();\\n\\nconst ageFunc = () => {\\n return \\"age\\";\\n};\\n\\nuserInfo.set(\\"name\\", \\"John Doe\\");\\nuserInfo.set(ageFunc(), 25);\\n\\nconsole.log(userInfo.get(ageFunc())); // 25\\n
Unlike objects, maps can only be created with the Map constructor. They can, however, be created from existing objects:
const userInfo = { name: \'John Doe\', age: 25 };\\nconst userMapInfo = new Map(Object.entries(userInfo));\\nconsole.log(userMapInfo) // Map { \'name\' => \'John Doe\', \'age\' => 25 }\\n
The set
method as we’ve seen earlier is used to assign properties and values to a map:
const userInfo = new Map();\\n\\nuserInfo.set(\\"name\\", \\"John Doe\\");\\nuserInfo.set(\\"age\\", 25);\\nuserInfo.set(\\"name\\", \\"Jonathan Doe\\");\\nconsole.log(userInfo); // Map { \'name\' => \'Jonathan Doe\', \'age\' => 25 }\\n
As seen in the example above, duplicate keys are overwritten by the latest addition to the map. So when you think of representing key-value pairs where each key has to be unique, then Map
is what you need.
We use the get
method to access an item in Map:
const userInfo = new Map();\\n\\nuserInfo.set(\\"name\\", \\"John Doe\\");\\nuserInfo.set(\\"age\\", 25);\\n\\nconst userName = userInfo.get(\\"name\\");\\nconsole.log(userName); // John Doe\\n
The size
property gives us access to the size of the map; we don’t need to create a custom function for that:
const userInfo = new Map();\\n\\nuserInfo.set(\\"name\\", \\"John Doe\\");\\nuserInfo.set(\\"age\\", 25);\\n\\nconsole.log(userInfo.size); // 2\\n
The for…of
loop can be used to loop through each item of a map. Each iteration returns an array of the key
and value
:
const userInfo = new Map();\\nuserInfo.set(\\"name\\", \\"John Doe\\");\\nuserInfo.set(\\"age\\", 25);\\n\\nconst keys = [];\\nconst values = [];\\nfor (const [key, value] of userInfo) {\\n keys.push(key);\\n values.push(value);\\n}\\n\\nconsole.log(keys); // [\'name\', \'age\']\\nconsole.log(values); // [\'John Doe\', 25]\\n
Alternatively, we can use the forEach
method to loop through each item of a map:
const userInfo = new Map();\\nuserInfo.set(\\"name\\", \\"John Doe\\");\\nuserInfo.set(\\"age\\", 25);\\n\\nconst keys = [];\\nconst values = [];\\n\\nuserInfo.forEach((value, key) => {\\n keys.push(key);\\n values.push(value);\\n});\\n\\nconsole.log(keys); // [\'name\', \'age\']\\nconsole.log(values); // [\'John Doe\', 25]\\n
The order of each iteration will always be based on the order in which they were inserted.
\\nThough, as we’ve seen, you can access the keys and values of a map using loops, JavaScript does provide a custom way of accessing the keys and values:
\\nconst userInfo = new Map();\\nuserInfo.set(\\"name\\", \\"John Doe\\");\\nuserInfo.set(\\"age\\", 25);\\n\\nconst keysIterator = userInfo.keys();\\nconsole.log(keysIterator) // MapIterator { \'name\', \'age\' }\\n\\nconst valuesIterator = userInfo.values();\\nconsole.log(valuesIterator) // MapIterator { \'John Doe\', 25 }\\n
You can access each item of both the keys and values by iterating on them:
\\nconst values = [];\\nconst keys = [];\\n\\nfor (const value of valuesIterator) {\\n values.push(value);\\n}\\nconsole.log(values) // [\'John Doe\', 25]\\n\\nfor (const key of keysIterator) {\\n keys.push(key);\\n}\\nconsole.log(keys) // [\'name\', \'age\']\\n
Or, on freshly created iterators (the loops above consume them), you can access each item with the next() function:
console.log(keysIterator.next().value) // \'name\'\\nconsole.log(keysIterator.next().value) // \'age\'\\n\\nconsole.log(valuesIterator.next().value) // \'John Doe\'\\nconsole.log(valuesIterator.next().value) // 25\\n
Because keys in maps can only occur once, we can use the set
method for updating existing items in the map:
const userInfo = { name: \'John Doe\', age: 25 };\\nconst userMapInfo = new Map(Object.entries(userInfo));\\nuserMapInfo.set(\'name\', \'Jonathan Doe\');\\n\\nconsole.log(userMapInfo) // Map { \'name\' => \'Jonathan Doe\', \'age\' => 25 }\\n\\n
Also, as we’ve seen, the set
method is used for adding new items:
const userInfo = { name: \'John Doe\', age: 25 };\\nconst userMapInfo = new Map(Object.entries(userInfo));\\nuserMapInfo.set(\'name\', \'Jonathan Doe\');\\nuserMapInfo.set(\'todos\', 12);\\n\\nconsole.log(userMapInfo) // Map { \'name\' => \'Jonathan Doe\', \'age\' => 25, \'todos\' => 12 }\\n
We use the delete
method to delete an item of a map with its key:
const userInfo = { name: 'John Doe', age: 25, todos: 12 };
const userMapInfo = new Map(Object.entries(userInfo));

userMapInfo.delete('todos');

console.log(userMapInfo) // Map { 'name' => 'John Doe', 'age' => 25 }
console.log(userMapInfo.get('todos')); // undefined
JavaScript also offers a clear
method to clear out all properties in a map:
const userInfo = { name: \'John Doe\', age: 25, todos: 12 };\\nconst userMapInfo = new Map(Object.entries(userInfo));\\nuserMapInfo.clear();\\n\\nconsole.log(userMapInfo) // Map {}\\n\\n
| Feature | Objects | Maps |
| --- | --- | --- |
| Key types | Strings and Symbols only | Any data type (e.g., objects, functions, numbers) |
| Iteration order | Not guaranteed (keys follow insertion order in modern browsers, but older implementations differ) | Guaranteed (keys maintain insertion order) |
| Performance | Faster for small datasets | Optimized for frequent additions and deletions |
| Size retrieval | Requires Object.keys(obj).length | Uses the map.size property |
| Prototype inheritance | Objects inherit from Object.prototype, which may cause unintended behavior | Maps do not inherit properties from Object.prototype |
| Key existence check | Uses "key" in obj or obj.hasOwnProperty("key") | Uses map.has(key) |
| Serialization | Supports JSON.stringify() | Needs manual conversion to an Object first |
| Use case | Best for structured, static data (e.g., API responses, user profiles) | Best for dynamic, frequently updated data (e.g., caching, lookup tables) |
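For instance, the key-existence checks from the table look like this side by side:

const userObj = { name: 'John Doe' };
console.log('name' in userObj);              // true
console.log(userObj.hasOwnProperty('name')); // true

const userMap = new Map([['name', 'John Doe']]);
console.log(userMap.has('name'));            // true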
When it comes to choosing the best dictionary-like structure for your use case, the shape of the data and how you want it to be handled is paramount.
\\nThe shape of your data determines whether an Object or a Map is more appropriate.
\\n\\nHow you manage data (insertion, lookup, deletion) also plays a key role.
Use objects when you need simple, structured data with string keys. Use maps when you need efficient insertions, deletions, and complex keys.
While JavaScript does not have a dedicated dictionary type, Objects and Maps provide robust solutions for handling key-value pairs. The choice between them depends on your specific needs: objects are best for structured, static data, while maps are best for dynamic, frequently updated key-value data.
Understanding the strengths and weaknesses of Objects and Maps in JavaScript will help you write more efficient and maintainable code. Whether you're storing user preferences, managing application state, or implementing a caching mechanism, choosing the right structure will improve your application's performance and readability.
\\n The grid-template-columns
property is just a small part of the CSS Grid Layout specification. To understand this property in particular, you first need to have an understanding of what the CSS Grid is.
To bring you up to speed, the Grid layout essentially defines a two-dimensional grid-based layout system that lets you design pages or templates using rows and columns, instead of using older techniques like float: right;
.
In this article, we’ll explore grid-template-columns
in CSS, understand what it is, and how to best use it. Let’s get started!
What is grid-template-columns?

Simply put, grid-template-columns
is a CSS property that defines the number and size of columns in a grid layout. This property can accept multiple values, separated by spaces, with each value defining the width of its respective column. The values can be fixed lengths (e.g., 100px
), percentages (e.g., 20%
), fractions (e.g., 1fr
), or content-based using values and functions such as auto
, minmax
, or repeat
.
Here is a basic demo of how to use grid-template-columns
:
grid-template-columns: auto auto auto;\\ngrid-template-columns: auto auto;\\ngrid-template-columns: 20% 20% 20% 20%;\\n\\n
The three auto
space-separated values represent three columns of equal width. The same applies to the following two auto
values. The four 20%
values tell us that the columns will have a width that is 20 percent of the parent element.
The syntax is pretty simple, but there are more values than just percentages and auto
. Consider the following code:
grid-template-columns: none|auto|max-content|min-content|length|initial|inherit;\\n\\n
Each value separated by the pipe is a potential value you can use for the template-columns
property. Each has its purpose, which we’ll go over in the next section.
The grid-template-columns
property is also animatable. If there’s a need for animations and transitions, the column values can be changed gradually to create a seamless transition in the grid layout:
<!DOCTYPE html>\\n<html>\\n <head>\\n <style>\\n .grid-container {\\n display: grid;\\n grid-template-columns: auto auto auto auto;\\n grid-gap: 10px;\\n background-color: black;\\n padding: 10px;\\n animation: mymove 5s infinite;\\n border-radius: 2vw;\\n }\\n\\n .grid-container > div {\\n background-color: white;\\n text-align: center;\\n padding: 20px 0;\\n font-size: 30px;\\n border-radius: 1.5vw;\\n }\\n\\n @keyframes mymove {\\n 20% {grid-template-columns: auto}\\n 40% {grid-template-columns: auto auto}\\n 50% {grid-template-columns: auto auto auto;}\\n 60% {grid-template-columns: auto auto}\\n 80% {grid-template-columns: auto}\\n }\\n </style>\\n </head>\\n <body>\\n <h1>Animation of the grid-template-columns Property</h1>\\n <p>The animation will change the number of columns from 1 to 3 then back to 1 and finally the original 4. on repeat </p>\\n <div class=\\"grid-container\\">\\n <div class=\\"item1\\">1</div>\\n <div class=\\"item2\\">2</div>\\n <div class=\\"item3\\">3</div> \\n <div class=\\"item4\\">4</div>\\n <div class=\\"item5\\">5</div>\\n <div class=\\"item6\\">6</div>\\n <div class=\\"item7\\">7</div>\\n <div class=\\"item8\\">8</div>\\n </div>\\n </body>\\n</html>\\n\\n
This is just a simple demo of how you can use this property with animations and transitions:
\\n\\n
Here’s a quick summary of the values we’ll cover in this article:
\\nValue | \\nDescription | \\n
---|---|
none | \\nDefault value. Creates implicit columns if needed | \\n
auto | \\nAutomatically sets column size based on content and available space | \\n
min-content | \\nColumns are sized to fit the smallest content in the column | \\n
max-content | \\nColumns are sized to fit the largest content in the column | \\n
minmax() | \\nThe column size is constrained between a minimum and maximum value | \\n
fit-content() | \\nAdjusts columns to fit its content but won’t exceed the specified size | \\n
percentage | \\nDefines the column size as a percentage of the grid container’s width | \\n
repeat() | \\nCreates a pattern of columns | \\n
length | \\nSets the column size using any valid length value | \\n
subgrid | \\nThe column inherits its sizing from the parent grid container | \\n
initial | \\nResets the property to its default behavior | \\n
inherit | \\nInherits grid-template-columns from the parent element | \\n
Before diving into the details, it’s worth noting that grid-template-columns
, like most CSS properties, accepts global values:
- grid-template-columns: inherit
- grid-template-columns: initial
- grid-template-columns: revert
- grid-template-columns: unset
These values generally manage inheritance or browser-specific defaults. For example, the initial
value resets the property to its default value (none
). They are most useful when you need to reset or override inherited grid properties.
Now, let’s look at the syntax with values you’re more likely to use when working with grid-template-columns
:
grid-template-columns: none|auto|max-content|min-content|length|flex|percentage|repeat();\\n\\n
These values fall into two main categories: <track-list>
and <auto-track-list>
.
Track-list values
\\n<track-list>
values are non-negative explicit values (i.e., directly specified values) that define exactly how many columns you want and their sizes.
Here are the available track-list
values:
none
The none value is the default for grid-template-columns
. It means no explicit grid tracks (columns) are defined, so the browser automatically generates implicit columns as needed.
Even though none
allows implicit (auto-generated) columns, it’s not considered a <track-list>
or <auto-track-list>
value. Instead, it’s a keyword value that removes explicit columns while still permitting implicit grid behavior.
This is similar to how none
works in other CSS properties like border: none;
or outline: none;
. It’s a deliberate keyword choice that means “remove this property’s effect” rather than “set this property to zero or empty.”
percentage
The percentage
value defines a track size relative to the grid container’s inline size (width in horizontal writing modes). Each percentage directly represents a portion of the container’s total width.
For example:
// CSS
.grid-container {
  display: grid;
  grid-template-columns: 20% 30% 50%;
  gap: 10px;
  background-color: black;
  padding: 10px;
  border-radius: 2vw;
}

// HTML
<div class="grid-container">
  <div class="grid-item">Item 1</div>
  <div class="grid-item">Item 2</div>
  <div class="grid-item">Item 3</div>
</div>
In this example, the first item Item 1
will occupy 20%
of the grid container’s width, Item 2
will take up 30%
, and Item 3
will use the remaining 50%
.
However, if you look at the result of our little example, you’ll notice that the third item in the container is overflowing:
\\nThis example demonstrates a common issue with percentage values in which they are calculated based on container width without accounting for gaps. So what’s actually happening is:
- 20% of container width (first column)
- 10px gap
- 30% of container width (second column)
- 10px gap
- 50% of container width (third column)

The percentage values add up to 100% of the width plus 20px from the gaps, which causes the overflow. To fix this, either adjust the percentage values to account for the gaps or use fr units instead.
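One possible fix that keeps the same 20/30/50 proportions while letting the browser account for the gaps is to swap the percentages for fr units:

.grid-container {
  display: grid;
  /* fr units divide only the space left after the gaps, so nothing overflows */
  grid-template-columns: 2fr 3fr 5fr;
  gap: 10px;
}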
length
Length values don’t refer to the literal word “length,” but rather any valid, non-negative CSS length that defines a track’s width. These can include:
\\npx
, cm
, mm
vw
, vh
em
, rem
fr
body {
  background-color: white;
  padding: 10vw;
}
#test-div {
  height: auto;
  display: grid;
  border-radius: 1vw;
  gap: 1vw;
  background-color: black;
  padding: 1vw;
  grid-template-columns: 10vw 25vw 40vw;
}
#test-div div {
  background-color: white;
  text-align: center;
  padding: 20px 0;
  font-size: 30px;
  border-radius: 0.5vw;
}
The code above will give the first column a width of 10vw
, the second a width of 25vw
, and the third a width of 40vw
. The output is shown below:
\\n
flex
Flex values allow you to create flexible grid columns using fractional units or fr
units. It specifies the column’s size as a fraction of the remaining space in the grid container after accounting for all fixed-size tracks and gaps. They provide flexible sizing that automatically adjusts to the container’s width:
grid-template-columns: 1fr 2fr 1fr;\\n\\n
This creates three columns where the middle column gets twice the space of the others:
\\n\\n
Flex values are often mixed with fixed values, and it’s important to understand their behavior in such cases. The grid first allocates space for the fixed columns. Since gaps are also considered, it will allocate space for defined gaps too. Whatever space is left will be shared between the flex columns according to the number of fractions they get.
\\n\\nFor example, let’s say the first column in the example above has a fixed width of 300px
, and the grid container’s width is 600px
:
.grid-container {\\n display: grid;\\n width: 600px;\\n grid-template-columns: 300px 2fr 1fr;\\n gap: 10px;\\n ...\\n}\\n\\n
In this example, the available space will be 300px
, which is what you get when you subtract 300px
from the grid container’s width of 600px
. The second column will get two fractions (2fr
) of the available space, while the third column will get one fraction (1fr
).
To calculate the width of 1fr
, we can use the following formula:
1fr = (Grid container width - Fixed column widths) / Total number of fr units
    = (600px - 300px) / 3
    = 300px / 3
1fr of 600px = 100px
Therefore, 1fr
in this example is 100px
. This means the second column will get 200px
(2fr
), and the third column will get 100px (1fr
):
To include gaps in the calculation, simply subtract the gap size from the remaining space along with the fixed column widths. The formula will be as follows:
1fr = (Grid container width - Gaps - Fixed column widths) / Total number of fr units
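Plugging in the earlier example's numbers (a 600px container, one fixed 300px column, and two 10px gaps), the calculation becomes:

1fr = (600px - 20px - 300px) / 3
    = 280px / 3
    ≈ 93.33px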
<auto-track-list> values
<auto-track-list>
values are implicit values set using keywords that automatically create a flexible number of columns that adapt or adjust to the column’s content or the container size. These values include:
auto
The auto
value is an auto-track-list keyword value that automatically sizes tracks based on available space and content. When multiple auto
tracks are specified, they share the available space equally after accounting for any fixed-width tracks.
In the code below, take note that the value of the grid-template-columns
property uses auto
three times. Therefore, you’ll have three columns of equal width:
body {\\n background-color: white;\\n padding: 10vw;\\n}\\n#test-div {\\n height: auto;\\n display: grid;\\n border-radius: 1vw;\\n gap: 1vw;\\n background-color: black;\\n padding: 1vw;\\n grid-template-columns: auto auto auto;\\n}\\n#test-div div {\\n background-color: white;\\n text-align: center;\\n padding: 20px 0;\\n font-size: 30px;\\n border-radius: 0.5vw;\\n}\\n\\n
The output:
\\nWith the auto
value, you can dive a little deeper. If you have another value for the property, say, grid-template-columns: auto 200px auto 250px;
, the UI will have four columns. The first one will have the same width as the third, while the second and fourth will have uniquely specified widths:
Now, this should tell you that auto
takes whatever space is available and divides it equally among any column with said value.
minmax(min, max)
This is a function value that defines a size range for specifying the minimum and maximum size of a grid column. The function takes two arguments: a minimum length value and a maximum length value:
\\ngrid-template-columns: minmax(min, max);\\n\\n
The min
and max
arguments basically define the size of a column on the grid, but instead of explicitly defining the column’s width, minmax
lets you flexibly size the column based on its content. In other words, you are essentially telling the grid that you don't know how large or small the content of this column is going to be. You're also saying that you don't want the column to shrink below the min argument or grow wider than the max argument.
For example:
\\ngrid-template-columns: 1fr minmax(100px, 300px) 1fr;\\n\\n
In this example, there are three columns. The first and third columns will take up equal fractions (1fr
) of the available space. The middle column will be at least 100px
wide but can expand up to 300px
if there’s enough room:
min-content
This is a keyword value that defines the smallest possible size a column can be while still fitting its content without causing overflow. This applies to various content types, including text, images, and videos.
\\nFor example, if a column contains a long line of text, instead of setting the column’s width to accommodate the entire line (as its counterpart max-content
would), min-content
will wrap the text and set the column’s width to the width of the longest word in the content.
Take the following example:
\\n<div class=\\"grid-container\\">\\n <div class=\\"item1\\">1fr</div>\\n <div class=\\"item2\\">The art of working is to conceptualize the unknown aaand get rid of the fear it introduces</div>\\n <div class=\\"item3\\">1fr</div>\\n </div>\\n\\n
By default, the column will wrap the content based on the responsive value assigned to it. If the column has a fixed width, the content will either wrap or overflow, depending on the fixed width and the content’s size.
\\nHowever, if the column is given a minmax()
value, as in the previous example, the output will be as follows:
In this case, the content in the second column needs more room than 300px, the maximum width specified by minmax(). Hence, its content is wrapped.
If we use the min-content
keyword instead, the content would wrap further. The word “conceptualize” would be considered the min-content
because it’s the longest word in the text, and the column’s width would be set accordingly:
max-content
The max-content
value defines the ideal size required for a column to fit all of its content without any line breaks or wrapping. Essentially, it allows the column to expand to the maximum length of its content.
If we revisit the previous example and apply max-content
, the column will expand until its entire content fits on a single line. However, this will often cause an overflow in the grid container, as there may not be enough space to accommodate the column’s increased size. To demonstrate, we’ll shorten the content:
It’s important to understand that when multiple items are in the grid, every item within the column where max-content
is defined will behave similarly. However, the item with the longest content will determine the width of all other items in that column, not their individual content lengths.
For example, if we add six more items to the grid container in this example:
\\n<div class=\\"grid-container\\">\\n <div class=\\"item1\\">1fr</div>\\n <div class=\\"item2\\">The art of curiosity is to conceptualize...</div>\\n <div class=\\"item3\\">1fr</div>\\n <div class=\\"item3\\">1fr</div>\\n <div class=\\"item3\\">1fr</div>\\n <div class=\\"item3\\">1fr</div>\\n <div class=\\"item3\\">1fr</div>\\n <div class=\\"item3\\">1fr</div>\\n <div class=\\"item3\\">1fr</div>\\n </div>\\n\\n
The result will look something like this:
\\nThe second column, due to its longer content, determines the width of the fifth and eighth columns. This same principle applies to min-content
. However, using max-content
often leads to overflows, as demonstrated.
fit-content()
The fit-content()
function behaves similarly to max-content
, but it provides an additional constraint. It takes a single percentage or length value argument, which it uses to limit the column’s maximum size. This allows the column to adapt to the size of its content while preventing it from growing beyond a specified maximum:
// CSS\\n.grid-container{\\n ...\\n grid-template-columns: 1fr fit-content(500px);\\n}\\n\\n// HTML\\n<div class=\\"grid-container\\">\\n <div class=\\"item1\\">max-content size</div>\\n <div class=\\"item2\\">The art of curiosity is to conceptualize...</div> \\n</div>\\n\\n
In this example, both columns will adapt to their content. However, the first column will not expand beyond its max-content
size because its content’s maximum length does not exceed 500px
. The second column, on the other hand, will expand until it reaches a width of 500px
because its content is longer than that limit:
repeat()
repeat()
is a commonly used value for the grid-template-columns
property. It is a function that allows you to create repeating patterns of columns with the same size. This function takes two arguments: the repetition count, and the column size to repeat:
grid-template-columns: repeat( [ <positive-integer> | auto-fill | auto-fit ] , track size);\\n\\n
The repeat()
function makes creating repeated columns or rows convenient. Instead of defining a grid-template-columns
property with multiple identical column sizes, like so:
grid-template-columns: 1fr 1fr 1fr 1fr;
You can use the repeat()
function to specify the number of columns you need and the size you want them to have. In this case, four columns of size 1fr
:
grid-template-columns: repeat(4,1fr);\\n\\n
The output would be as expected: four columns of equal width, each taking up a fraction of the available space:
\\n\\n
auto-fill
and auto-fit
As indicated in the previous section, the repeat() function not only takes integer values for the first argument (i.e., the repetition count), but also the auto-fill
or auto-fit
keywords. These keywords create as many columns as can fit into a row of a grid container.
While both keywords automatically fit and fill as many columns as possible into a grid row before wrapping to a new line, they behave slightly differently.
\\nThe auto-fill
keyword leaves any remaining space in the grid container after creating the necessary columns. It treats this space as a gap. This is because auto-fill
creates empty columns, even if there’s no content to populate them:
The auto-fit
keyword, on the other hand, adjusts the columns to fit the container size if there’s extra room. This eliminates gaps and makes the grid more responsive.
The example above, when using auto-fit, will look like this:
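A quick way to compare the two keywords is to pair each with minmax() in otherwise identical containers; with only a few items, auto-fill reserves empty tracks while auto-fit collapses them so the filled columns stretch:

/* auto-fill: empty 150px tracks are kept, leaving unused space */
.grid-fill {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
  gap: 10px;
}

/* auto-fit: empty tracks collapse, so the filled columns stretch across the row */
.grid-fit {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  gap: 10px;
}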
\\n[linename]
Named grid lines, also known as [linename]
, are not a grid-template-columns
value in themselves. Instead, they provide a way to assign custom names to the lines that define the boundaries of the grid column, instead of the usual numerical way of identifying them. This makes it easier to reference and position grid items within the grid layout, especially in complex designs.
Grid lines can be assigned names using square brackets. Here’s the basic syntax:
\\ngrid-template-columns: [line-name] track-size [line-name];\\n\\n
Consider the following example:
\\ngrid-template-columns: [main-start] 1fr [content-start] 1fr [content-end] 1fr [main-end];\\n\\n
This defines three columns with named lines at the start and end of each track.
\\nWith the [linename]
defined, you can position grid items more intuitively within the grid container instead of using numerical values:
.grid-item:nth-child(1) {\\n grid-column-start: main-start;\\n grid-column-end: content-start;\\n}\\n\\n.grid-item:nth-child(2) {\\n grid-column-start: content-start;\\n grid-column-end: content-end;\\n}\\n\\n.grid-item:nth-child(3) {\\n grid-column-start: content-end;\\n grid-column-end: main-end;\\n}\\n\\n.grid-item:nth-child(4) {\\n grid-column-start: main-start;\\n grid-column-end: main-end;\\n}\\n\\n
The result will be as you’d expect:
\\nYou can also assign multiple names to a grid line by separating them with spaces inside the square brackets:
\\ngrid-template-columns: [sidebar-end main-start] 1fr [content-start] 1fr [content-end] 1fr [main-end];\\n\\n
In this case, the grid line has two names: sidebar-end
and main-start
. You can refer to this line by either name when placing grid items.
Line names provide a powerful way of creating flexible and maintainable grid layouts. However, it’s important to note that they are <custom-ident>
keywords, a CSS data type identifier that represents any valid CSS name that you create, as long as it doesn’t conflict with any predefined CSS keywords.
For example, keywords such as span
, auto
, inherit
, initial
have specific meanings in CSS, and using them as custom identifiers could lead to unexpected behavior.
Understanding these values will give you leverage when creating dynamic, flexible, and responsive grid layouts. The auto-track-list
values are particularly suitable for such use cases. When combined with track-list
values such as fractional units and keywords like auto
, they make for a powerful and adaptable grid system.
grid-template-columns
with grid-auto-flow
The grid-template-columns
property is not a one-size-fits-all solution for creating fully responsive and flexible grid layouts. There are instances where you’ll need more granular control over column sizes and item placements than grid-template-columns
alone can provide.
In these cases, you might need to combine grid-template-columns
with other grid properties to achieve the desired layout. One such property is the grid-auto-flow
property that controls how auto-placed items are inserted into the grid. It determines the flow of items when they are not explicitly positioned using grid lines or areas.
Combining these properties means grid-template-columns
defines the columns and their sizes, while grid-auto-flow
determines how they are added to the grid, provided the items have not been explicitly placed.
grid-auto-flow
has four primary values:
row
— Places items in rows (default)column
— Places items in columnsrow dense
— Fills in gaps in rowscolumn dense
— Fills in gaps in columnsThe default item flow in a grid container is row
, meaning items are arranged horizontally in each row. When a row is full, the placement continues on the next row:
As you can see, items are placed in rows; when there is no more space left in the first row, placement continues in the next row.
\\nWith the combination of the grid-auto-flow
property, we have control over the flow of the item placement. Instead of the default row
placement, we can use column
flow, which arranges the items in each column vertically. Where there is no space left in a column, the placement continues in the next column:
It is important to note that using the grid-auto-flow: column;
value requires you to add the grid-template-rows
property into the mix. This will help create more rows for the items to occupy in each column. Without defining grid-template-rows
, the items would stack horizontally within a single row, leading to an undesirable layout:
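A minimal sketch of the column-flow setup described above (track sizes are assumed for illustration) might look like this:

.grid-container {
  display: grid;
  grid-template-rows: repeat(3, auto); /* three rows per column */
  grid-auto-flow: column;              /* fill each column top to bottom, then move right */
  gap: 10px;
}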
Combining these properties allows you to create complex and dynamic grid layouts with greater control over item placement.
\\nWorking with the grid-template-columns
property can be straightforward, but there are some common mistakes that developers often make. Here are a few of them along with tips on how to avoid these pitfalls.
Misusing the repeat() function

Just as the repeat() function is the most commonly used value of the grid-template-columns
function is the most commonly used value of the grid-template-columns
property, it also tends to be the most commonly misused.
One of the most common errors when using repeat()
with auto-fit
or auto-fill
is attempting to use flexible values like 1fr
for the column track size:
grid-template-columns: repeat(auto-fit, 1fr);\\n\\n
This is invalid because repeat()
with auto-fit
or auto-fill
requires an explicit column track size to accurately calculate the number of columns that can fit within the grid container. 1fr
is a flexible size, not an explicit one.
However, it's perfectly valid to use a flexible value like 1fr as the function's second argument, as long as the first argument is a positive integer:
\\ngrid-template-columns: repeat(3, 1fr);\\n\\n
Some units are incompatible by default, and mixing them incorrectly can lead to invalid expressions that are likely to break the layout. However, unit incompatibility isn't limited to syntax errors. Even units that work together syntactically can be incompatible in terms of responsiveness and flexibility.
\\nThe most common incompatibility occurs when mixing fixed-length units (like px
, em
, rem
, etc.) with flexible units (fr
) without a clear strategy.
Consider the following example:
\\n.grid-container {\\n display: grid;\\n grid-template-columns: 100px 1fr 200px 1fr;\\n}\\n\\n
While this might seem valid, it can lead to unpredictable behavior. The browser will allocate 100px
and 200px
to the first and third columns, respectively. Then, it will divide the remaining space equally between the two 1fr
columns.
The issue arises because the fr
units are dependent on the space remaining after the pixel-based columns are placed. This can become unpredictable as the container size changes (e.g., on different screen sizes).
The best approach is generally to stick to either fixed units (if you need precise control) or flexible units (fr
) for the majority of your columns. If you need certain columns to have a minimum size but also expand, use minmax()
. This enhances responsive behavior significantly:
.grid-container {\\n display: grid;\\n grid-template-columns: minmax(100px, 1fr) 1fr minmax(200px, 1fr);\\n}\\n\\n
While CSS allows you to mix these units, it often results in unpredictable and difficult-to-maintain layouts. Therefore, it's important to carefully consider your sizing strategy and use minmax()
and auto
where applicable.
You might encounter unexpected extra space in your grid layout, even without explicitly defining one. This can stem from various reasons, but one common reason is using the auto-fill
keyword with the repeat()
function.
Remember that auto-fill
preserves any remaining space in the grid container after creating the necessary columns. If this is the case, you can rectify it by using auto-fit
instead, which stretches the columns to fill the container.
Another potential solution is to verify that the gap
property is not being unintentionally applied to the grid container. Regardless, explicitly setting gap: 0;
can be a proactive measure to prevent such issues and save debugging time.
The grid-template-columns
property has been around for quite some time. It was first drafted by the W3C back in 2011 and gained full browser support in 2017, when all major browsers (Chrome, Firefox, Safari, Edge) adopted the standard CSS Grid specification.
Here’s a caniuse browser compatibility chart for the grid-template-columns
properties:
However, certain values, such as the [masonry]
keyword, which is currently in Editor's Draft status, remain experimental and lack full browser support. It can only be enabled behind the layout.css.grid-template-masonry-value.enabled flag in Firefox, in the Safari Technology Preview, or via a polyfill:
As I said at the very beginning of this post, grid-template-columns
is just one aspect of the whole grid system. In this article, we learned how to manipulate column widths using the none
, auto
, max-content
, min-content
, and length
values.
If this gets you excited, then you can dive deep into everything that comes with the display: grid
line in your CSS. The possibilities are endless. Happy coding!
One of the best capabilities of modern web apps is providing smooth and responsive user interactions. Some common examples of such interactions are switching between tabs, loading paginated data, and filtering or sorting dynamic content.
\\nReact makes these interactions easier with its built-in and third-party state management solutions such as the useState
Hook, the Context API, Redux, and more.
However, some cases demand that states persist in support of the app’s overall UX. In such scenarios, using client-side storage or databases doesn’t make much sense, as this persistence has more to do with the app’s universal usability than personalization.
\\nThis tutorial will explore handling such state changes with URL and search parameters. By the end, you’ll understand the importance of URL-based state management in React with its SEO, performance, and accessibility considerations.
\\nN.B., As a prerequisite, you should have a general idea of working with React, React Hooks, and TypeScript. The code examples in this guide use Tailwind CSS for UI styling, but we won’t focus on it too much. You can find the source code in this GitHub repo.
\\nWhen an app’s views depend on the URL for state changes, it is said to be managing states via URL. These changes in views range from small, interactive updates to huge ones that decide the core nature of a page.
\\nConsider the following example where a URL determines the data for a Google search page with a search query and a filter:
\\nhttps://www.google.com/search?q=hello&udm=7\\n\\n
The structure of the URL shared above contains the following elements:

- The page's path (/search)
- A query string, which begins after the question mark (?) in the URL (q=hello&udm=7)
- Two search parameters (q=hello and udm=7), separated by an ampersand sign (&)

With their data, these URL units communicate with a backend behind the scenes to populate the frontend with relevant information. Google follows the same URL pattern for all search pages, where only the search parameters change to filter a search or perform another search operation.
\\nSince the URL here contains all the triggers controlling the crucial information on the page, you can save or bookmark it and revisit it to access the expected data, which is a big plus from a UX perspective.
\\nOn the other hand, React apps do the exact opposite of the example above. By default, they can’t control the state from the URL unless we tell them to.
\\nManaging states with the URL should establish a bidirectional data flow between the application states and the URL. This is important to keep the states synchronized and fresh and avoid using stale data.
Here's what a URL-based state management setup using React (with the React Router DOM library) brings to the table: shareable, bookmarkable views; browser back/forward navigation that works as expected; and states that persist across page reloads.
\\nManaging states with URLs is largely a client-side concern, which makes client-side rendering the key focus of this article.
\\nIt’s worth noting that URL-based state management is not a replacement for traditional state management. These two work together to enable the utilization of URLs to persist views, making the overall app more accessible and user-friendly.
\\nBuilding a store-like utility is perfect for demonstrating the implementation of state management using URLs in React, as it poses some complex challenges, such as paging and filtering data.
\\n\\nLet’s create an app that uses a mock JSON API to fetch dummy product data in a paged fashion, as we might see on ecommerce apps. We’ll add more features later, such as filtering products on a categorical, price, and rating basis.
\\nSetting up a React app with TypeScript is fairly simple with pnpm and Vite:
pnpm create vite@latest url-based-react-state -- --template react-ts

With the react-ts template, the project is scaffolded with TypeScript as the core language, and you are good to go.
\\nAfter cd-ing into the project directory and installing the required dependencies, you may add Tailwind CSS to the app or skip it if you are considering a different UI solution.
\\nCreating dedicated directories to organize things is a best practice at the start of any project. I’m considering separating the API logic from the components and will provide them with the data they need with some custom React Hooks.
\\nFollowing the same approach with the types, utilities, and configuration data, the final project folder looks something like the following:
\\nTo work with routes and the browser URL, we should install the React Router DOM package, a library built on top of the core React router for handling routing smoothly.
\\nAt this point, we should also install the TanStack Query to avoid the repeating usage of state boilerplate in our hooks and handle errors and data caching more efficiently:
\\npnpm add react-router-dom @tanstack/react-query\\n\\n
For the TanStack Query and routing to work properly, our App
component should be wrapped within QueryClientProvider
, which should be placed inside the BrowserRouter
component as shown below:
// src/Main.tsx\\nimport { BrowserRouter } from \\"react-router-dom\\";\\nimport { QueryClient, QueryClientProvider } from \\"@tanstack/react-query\\";\\n/* Other imports... */\\n\\nconst queryClient = new QueryClient();\\n\\ncreateRoot(document.getElementById(\\"root\\")!).render(\\n <StrictMode>\\n <BrowserRouter>\\n <QueryClientProvider client={queryClient}>\\n <App />\\n </QueryClientProvider>\\n </BrowserRouter>\\n </StrictMode>\\n);\\n\\n
We can now add routes to the App
component according to our requirements and hand them the components they are supposed to show:
// src/App.tsx\\n\\nexport default function App() {\\n return (\\n <>\\n <Navbar />\\n <Routes>\\n <Route path=\\"/\\" element={ /* HomePage component */ } />\\n <Route path=\\"*\\" element={ /* NotFoundPage component */ } />\\n </Routes>\\n </>\\n );\\n}\\n\\n
As discussed, we are using the DummyJSON API to populate views in our app. If you have a backend API of your own, you may use it instead with the required changes in types and URL endpoints.
\\nHere’s the URL the API provides us to fetch a list of products:
\\nhttps://dummyjson.com/products/?limit=10&skip=0\\n\\n
The limit
and skip
keys in the API URL determine the number of products to load and skip respectively. Both these search parameters work together to achieve different sets of data. The structure of the response we receive upon requesting this URL looks something like this:
Based on this JSON schema, we can construct types to handle different kinds of data in our app. You can also use tools like JSON2TS to convert the JSON schema instantly into TypeScript types:
// src/types/product.ts

// A single product
export interface Product {
  id: number;
  title: string;
  description: string;
  price: number;
  ...
}

// A collection of products with additional response info
export interface ProductsResponse {
  products: Product[];
  total: number;
  skip: number;
  limit: number;
}

// Types for query params
export interface ProductsQueryParams {
  limit?: number;
  skip?: number;
}
Some types will be used repeatedly throughout the project. We should group such types based on relevance and maintain them separately for better organization.
\\n\\nLet’s declare some configuration options before moving to the API logic part. These values should go right into the api.ts
and pagination.ts
files of the config folder:
// src/config/api.ts

export const API_CONFIG = {
  BASE_URL: "https://dummyjson.com",
  ENDPOINTS: {
    PRODUCTS: "/products",
    PRODUCTS_BY_CATEGORY: "/products/category",
    PRODUCT_CATEGORIES: "/products/categories",
  },
  buildUrl: (endpoint: string) => `${API_CONFIG.BASE_URL}${endpoint}`,
};

// src/config/pagination.ts
export const PAGINATION_CONFIG = {
  ITEMS_PER_PAGE: 9,
  INITIAL_ITEMS_TO_SKIP: 0,
};
The above definitions are self-explanatory and will provide crucial data like API URLs, the number of products per page, and more. The value 9
for ITEMS_PER_PAGE
will facilitate the construction of a 3×3 product card grid later.
Using these configuration options, we can set up utility functions to neatly construct our API URLs:
\\n// src/utils/getApiUrls.ts\\n\\nexport const getProductsUrl = () =>\\n API_CONFIG.buildUrl(API_CONFIG.ENDPOINTS.PRODUCTS);\\n\\n
After declaring types, we should create a new file in the apis
directory, name it productApi.ts
, and define an object called productsApi
in it. This object will act as an abstraction layer over the underlying DummyJSON API.
Since this object will contain endpoint functions that communicate with the API to bring us the data we need to show on the frontend, we can call it the API wrapper or API client. We may now define separate methods inside it to load a list of products, an individual product, categories, etc.
\\nLet’s define getProducts
, which takes an optional object (params
) of type ProductQueryParams
as an argument. With the limit
and skip
properties of params
, we can construct a query string and attach it to the API URLs for loading the data in a paged fashion:
// src/apis/productApi.ts

export const productsApi = {
  async getProducts(params?: ProductsQueryParams) {
    const queryParams = new URLSearchParams();

    // Add pagination params
    queryParams.append(
      "limit",
      (params?.limit ?? PAGINATION_CONFIG.ITEMS_PER_PAGE).toString()
    );
    queryParams.append(
      "skip",
      (params?.skip ?? PAGINATION_CONFIG.INITIAL_ITEMS_TO_SKIP).toString()
    );

    const response = await fetch(`${getProductsUrl()}?${queryParams}`);
    if (!response.ok) {
      throw new Error(
        `API Error: ${response.status} - failed to load products.`
      );
    }

    return response.json() as Promise<ProductsResponse>;
  }
};
The above definition of the getProducts
method illustrates the use of native JavaScript’s URLSearchParams
object for constructing a query string (queryParams
) with the limit and skip values.
It then constructs the required API URL with queryParams
, uses the fetch API to get a response from the DummyJSON server, and returns the JSON data as a promise after the basic error checking.
We have a choice to use this API method directly in our components. However, using a custom hook to construct the data and paging logic is a better approach.
\\nThis custom hook communicates with our API client using certain search parameters of our app URL and generates loading, error, and data states accordingly.
\\nWith the useSearchParams
Hook from the React Router DOM library, we can grab the value of search parameters from the query string of the app URL. In this case, we need the value of the page
parameter, which, if not found, defaults to 1
. This means the first page is always shown when the page
search parameter is not found:
// src/hooks/useProducts.ts

export function useProducts(limit: number) {
  const [searchParams, setSearchParams] = useSearchParams();
  const currentPage = Number(searchParams.get("page")) || 1;
  const skip = (currentPage - 1) * limit; // items to skip for the current page
}
When managing data, loading, and error states, the first thing that comes to mind is a big pile of useState
Hooks. With TanStack Query, we don’t need all that boilerplate code to manage states. On top of that, we can add caching, conditional loading, error handling, and prefetching support right out of the box:
// src/hooks/useProducts.ts\\n\\nexport function useProducts(limit: number) {\\n // Previously declared states...\\n\\n const { data, isLoading, error } = useQuery({\\n queryKey: [\\"products\\", { limit, skip }],\\n queryFn: () => productsApi.getProducts({ limit, skip })\\n });\\n}\\n\\n
When setting up a TanStack query, we provided an identity to the query with queryKey
. We then gave queryFn
a reference to our getProducts
API function to load the data. If limit or skip values change, the query will automatically re-call the API function. You may specify other properties to the query to optimize it your way.
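One optional refinement for paged queries, assuming TanStack Query v5 (v4 used a keepPreviousData: true flag instead), is keeping the previous page on screen while the next one loads via the placeholderData option:

// src/hooks/useProducts.ts (optional tweak, not part of the original hook)
import { keepPreviousData, useQuery } from "@tanstack/react-query";

const { data, isLoading, error } = useQuery({
  queryKey: ["products", { limit, skip }],
  queryFn: () => productsApi.getProducts({ limit, skip }),
  placeholderData: keepPreviousData, // show the stale page while the next one fetches
  staleTime: 60_000, // treat a page as fresh for a minute to limit refetching
});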
We can then calculate the total number of pages (totalPages
) by dividing the total
value available through the API response by limit
we are using as a parameter for the useProducts
Hook.
Defining a function to facilitate pagination is apt here, as we have all the info we need to control the paging and navigation between pages. The totalPages
and currentPage
values will help us formulate a pagination logic:
// src/hooks/useProducts.ts\\n\\nexport function useProducts(limit: number) {\\n // Previously declared states...\\n\\n const totalPages = Math.ceil((data?.total || 0) / limit);\\n\\n const goToPage = useCallback(\\n (page: number) => {\\n if (page >= 1 && page <= totalPages) {\\n setSearchParams((prev) => {\\n const params = new URLSearchParams(prev);\\n params.set(\\"page\\", page.toString());\\n return params;\\n });\\n }\\n }, [totalPages, setSearchParams]\\n );\\n}\\n\\n
The useProducts
Hook returns a long list of states, values, and methods that we can later access in a component:
// src/hooks/useProducts.ts\\n\\nexport function useProducts(limit: number) {\\n // ...\\n\\n return {\\n products: data?.products || [],\\n total: data?.total || 0,\\n isLoading,\\n error,\\n currentPage,\\n totalPages,\\n goToPage,\\n hasNext: currentPage < totalPages,\\n hasPrevious: currentPage > 1,\\n goToNext: () => currentPage < totalPages && goToPage(currentPage + 1),\\n goToPrevious: () => currentPage > 1 && goToPage(currentPage - 1),\\n };\\n}\\n\\n
With the API client and useProducts
Hook to do all the heavy lifting, the structure and functioning of components will be pretty straightforward.
Let’s create three components in the components/product
directory that handle the rendering of individual cards, a grid of such cards, and the pagination of the card grid.
The ProductCard
component takes the product data as a prop and uses its properties, such as title, name, price, thumbnail, etc., to give the card some identity. Note that this component will be part of the ProductGrid
component, which we will define in the next segment:
// src/components/product/ProductCard.tsx\\n\\nexport function ProductCard({ product }: { product: Product }) {\\n return (\\n <article className=\\"...\\">\\n <div className=\\"...\\">\\n <h2 className=\\"...\\">\\n {product.title}\\n </h2>\\n\\n <div className=\\"...\\">\\n <p className=\\"...\\">\\n ${product.price}\\n </p>\\n </div>\\n </div>\\n </article>\\n );\\n}\\n\\n
Next, define a ProductGrid
component that uses a collection of product data, allowing us to loop over it and assign the required data from each item to the ProductCard
component:
// src/components/product/ProductGrid.tsx\\n\\nexport default function ProductGrid({ products }: { products: Product[] }) {\\n return (\\n <div className=\\"...\\">\\n {products.map((product) => (\\n <ProductCard key={product.id} product={product} />\\n ))}\\n </div>\\n );\\n}\\n\\n
The ProductPagination
component efficiently controls the paged navigation using several props. It will receive all values for these props later in the ProductPage
component through the useProducts
Hook:
// src/components/product/ProductPagination.tsx\\n\\ninterface PaginationProps {\\n currentPage: number;\\n totalPages: number;\\n onNext: () => void;\\n onPrevious: () => void;\\n hasNext: boolean;\\n hasPrevious: boolean;\\n isLoading: boolean;\\n}\\n\\nexport default function Pagination({\\n currentPage,\\n ...\\n isLoading,\\n}: PaginationProps) {\\n return (\\n <nav\\n className=\\"...\\"\\n aria-label=\\"Pagination\\"\\n >\\n <button\\n onClick={onPrevious}\\n disabled={!hasPrevious || isLoading}\\n aria-label=\\"Previous page\\"\\n >\\n Previous\\n </button>\\n\\n <span className=\\"text-sm text-gray-700\\">\\n Page {currentPage} of {totalPages}\\n </span>\\n\\n <button\\n onClick={onNext}\\n disabled={!hasNext || isLoading}\\n aria-label=\\"Next page\\"\\n >\\n Next\\n </button>\\n </nav>\\n );\\n}\\n\\n
Let’s put all these pieces together in the ProductsPage
component. We grab all the necessities using the useProducts
Hook first, and then provide these values appropriately to the ProductGrid
and ProductPagination
components as shown below:
// src/components/pages/ProductPage.tsx

export default function ProductsPage() {
  const {
    products,
    currentPage,
    ...
    error,
    isLoading
  } = useProducts(PRODUCTS_PER_PAGE);

  if (error) {
    return <div>{error.message}</div>;
  }

  return (
    <main>
      <section className="...">
        {isLoading ? (
          <div>Loading...</div>
        ) : (
          <>
            <ProductGrid products={products} />
            <Pagination
              currentPage={currentPage}
              ...
              isLoading={isLoading}
            />
          </>
        )}
      </section>
    </main>
  );
}
One last thing remaining is to add a route in the App
component and point it to the ProductsPage
component. You may also set it to the main path, but I’m using the /products
path because I’m using the main page to explain what this app does. You should also use a fallback component when the requested path doesn’t match any routes we have set up here:
// src/App.tsx\\n\\nexport default function App() {\\n return (\\n <>\\n <Navbar />\\n <Routes>\\n <Route path=\\"/\\" element={<HomePage />} />\\n <Route path=\\"/products\\" element={<ProductsPage />} />\\n <Route path=\\"*\\" element={<NotFoundPage />} />\\n </Routes>\\n </>\\n );\\n}\\n\\n
After running the app, if you navigate to the /products
route, you should see several products and a nice pagination that allows you to move between different product pages. You can now bookmark any of these pages and visit them later to continue navigating from exactly where you left off.
The product pagination part was pretty straightforward: we checked for the page query parameter and built the logic around it. However, things get more complex when you want to utilize multiple parameters in the query string and still get the expected results.
\\nOne such case is filtering data based on multiple parameters. The API we are using offers many options to request filtered data. Let’s implement two features to filter our products based on:
- Sorting, using the sortBy and order query parameters
- Categories, filtering the products shown in the ProductsGrid component based on these categories

Implementing these two filters and making them work together while respecting pagination can be a challenging task.
\\nThe sorting can be achieved using the API URL with sortBy
and order query parameters as shown in the URL below:
https://dummyjson.com/products?sortBy=price&order=asc\\n\\n
DummyJSON provides sorting on three fields: title, price, and rating. The order query parameter is supported only for price- and title-based sorting.
\\nThis also requires adding sortBy
, order, and category keys in the ProductsQueryParams
type. You may make it stricter by specifying values for sortBy
. For now, I’m keeping it a string only:
// src/types/product.ts\\n/* Previously added types... */\\n\\nexport type ProductsQueryParams = {\\n limit?: number;\\n skip?: number;\\n sortBy?: string | null;\\n order?: \\"asc\\" | \\"desc\\" | null;\\n category?: string | null;\\n};\\n\\n
If the sortBy
and order
search parameters are provided to the getProducts
method, we will add them to the existing queryParams
object.
Also, the loading of products according to the category is possible from the following URL structure:
\\nhttps://dummyjson.com/products/category/smartphones\\n\\n
We can easily form a separate URL for the category-based loading of products by grabbing the category slug from the query string. The modified getProducts
method would look something like this:
// src/apis/productApi.ts\\n\\nexport const productsApi = {\\n async getProducts(params?: ProductsQueryParams) {\\n const queryParams = new URLSearchParams();\\n\\n // Add pagination params\\n queryParams.append(\\n \\"limit\\",\\n (params?.limit ?? PAGINATION_CONFIG.ITEMS_PER_PAGE).toString()\\n );\\n queryParams.append(\\n \\"skip\\",\\n (params?.skip ?? PAGINATION_CONFIG.INITIAL_ITEMS_TO_SKIP).toString()\\n );\\n\\n // Add optional sort params\\n if (params?.sortBy && params?.order) {\\n queryParams.append(\\"sortBy\\", params.sortBy);\\n queryParams.append(\\"order\\", params.order);\\n }\\n\\n // Pick the right URL\\n const url = params?.category\\n ? getCategoryProductsUrl(params.category)\\n : getProductsUrl();\\n\\n const response = await fetch(`${url}?${queryParams}`);\\n\\n if (!response.ok) {\\n throw new Error(\\n `API Error: ${response.status}; failed to load products.`\\n );\\n }\\n\\n return response.json() as Promise<ProductsResponse>;\\n },\\n};\\n\\n
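The getProductsUrl and getCategoryProductsUrl helpers used above aren't shown in this excerpt. A minimal sketch, assuming the DummyJSON base URL, might look like this:

// src/utils/getApiUrls.ts (sketch; names and base URL assumed)

const PRODUCTS_URL = "https://dummyjson.com/products";

export function getProductsUrl() {
  return PRODUCTS_URL;
}

export function getCategoryProductsUrl(categorySlug: string) {
  // e.g. https://dummyjson.com/products/category/smartphones
  return `${PRODUCTS_URL}/category/${encodeURIComponent(categorySlug)}`;
}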
Getting a list of all the categories with labels and slugs is possible with the following API URL:
https://dummyjson.com/products/categories
Adding a method (getCategories
) to load these categories is simple compared to the getProducts
method:
// src/apis/productApi.ts

export const productsApi = {
  async getProducts(params?: ProductsQueryParams) { ... },
  async getCategories() {
    const response = await fetch(`${PRODUCTS_URL}/categories`);

    if (!response.ok) {
      throw new Error(
        `API Error: ${response.status}; failed to load categories.`
      );
    }

    return response.json() as Promise<ProductCategory[]>;
  },
};
We have to optimize the useProducts
Hook to include sorting and category-based loading of products:
// src/hooks/useProducts.ts\\n\\nexport function useProducts(limit: number) {\\n const [searchParams, setSearchParams] = useSearchParams();\\n\\n const currentPage = Number(searchParams.get(\\"page\\")) || 1;\\n const skip = (currentPage - 1) * limit;\\n const sortBy = searchParams.get(\\"sortBy\\");\\n const order = searchParams.get(\\"order\\") as \\"asc\\" | \\"desc\\" | null;\\n const category = searchParams.get(\\"category\\");\\n\\n const { data, isLoading, error } = useQuery({\\n queryKey: [\\"products\\", { limit, skip, sortBy, order, category }],\\n queryFn: () =>\\n productsApi.getProducts({\\n limit,\\n skip,\\n sortBy: sortBy || null,\\n order,\\n category,\\n }),\\n });\\n\\n // Pagination logic...\\n\\n return { ... };\\n}\\n\\n
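The elided pagination logic can be derived entirely from the total count in DummyJSON's response. Here's a hedged sketch of what it might look like inside the Hook; the goToNext/goToPrevious names match what the ProductsPage component expects:

// Pagination logic sketch (inside useProducts)
const total = data?.total ?? 0;
const totalPages = Math.max(1, Math.ceil(total / limit));
const hasPrevious = currentPage > 1;
const hasNext = currentPage < totalPages;

const goToPage = (page: number) =>
  setSearchParams((prev) => {
    const params = new URLSearchParams(prev);
    params.set("page", String(page));
    return params;
  });

const goToNext = () => {
  if (hasNext) goToPage(currentPage + 1);
};

const goToPrevious = () => {
  if (hasPrevious) goToPage(currentPage - 1);
};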
Also, we should add a new useCategories
Hook that delivers a list of categories in a label-slug key-value pair, which is pretty simple:
// src/hooks/useCategories.ts

export function useCategories() {
  return useQuery({
    queryKey: ["categories"],
    queryFn: () => productsApi.getCategories(),
  });
}
Let’s set up a Sidebar
component that uses a select combo box to trigger the sorting logic we just established. Adding some configuration for our sort options is a good starting point here. We can iterate through these options to form a select combo box later:
// src/config/sorting.ts

export const SORT_OPTIONS = [
  { label: "Title (A-Z)", value: "title-asc" },
  { label: "Title (Z-A)", value: "title-desc" },
  { label: "Price (Low to High)", value: "price-asc" },
  { label: "Price (High to Low)", value: "price-desc" },
  { label: "Rating (High to Low)", value: "rating-desc" },
];
In the Sidebar.tsx
file, we can get the sortBy
and order
search parameters from our app URL with the help of the useSearchParams
Hook provided by React Router DOM and use them to prepare the selected value for the select box dedicated to sorting the products:
// src/components/layout/Sidebar.tsx

export default function Sidebar() {
  const [searchParams, setSearchParams] = useSearchParams();

  const sortBy = searchParams.get("sortBy");
  const order = searchParams.get("order");

  const currentSortValue = sortBy && order ? `${sortBy}-${order}` : "";
}
We can then set up a handler function for the sorting select box to get the input value and set sortBy
and order
query parameters in the app URL using it:
// src/components/layout/Sidebar.tsx

export default function Sidebar() {
  // ...

  const handleSortChange = (event: React.ChangeEvent<HTMLSelectElement>) => {
    setSearchParams((prev) => {
      const params = new URLSearchParams(prev);
      const [sortBy, order] = event.target.value.split("-");
      if (sortBy && order) {
        params.set("sortBy", sortBy);
        params.set("order", order);
      } else {
        params.delete("sortBy");
        params.delete("order");
      }
      params.set("page", "1");
      return params;
    });
  };
}
We can then use the SORT_OPTIONS
to construct our select combo and provide it with the right values we just established above:
// src/components/layout/Sidebar.tsx

export default function Sidebar() {
  // ...

  return (
    <aside className="hidden md:block w-60 flex-shrink-0">
      <h2>Sort products</h2>
      <label htmlFor="sort">
        Sort By
      </label>
      <select
        id="sort"
        value={currentSortValue}
        onChange={handleSortChange}
        className="..."
      >
        <option value="">Default Sorting</option>
        {SORT_OPTIONS.map((option) => (
          <option key={option.value} value={option.value}>
            {option.label}
          </option>
        ))}
      </select>
    </aside>
  );
}
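The category filter we wired into getProducts earlier can follow the exact same pattern in the sidebar. Here's a hedged sketch; the slug and name fields are assumed to match DummyJSON's category objects:

// Category filter sketch for Sidebar.tsx
const { data: categories } = useCategories();
const currentCategory = searchParams.get("category") ?? "";

const handleCategoryChange = (event: React.ChangeEvent<HTMLSelectElement>) => {
  setSearchParams((prev) => {
    const params = new URLSearchParams(prev);
    if (event.target.value) {
      params.set("category", event.target.value);
    } else {
      params.delete("category");
    }
    params.set("page", "1"); // reset pagination when the filter changes
    return params;
  });
};

// In the returned JSX:
<select value={currentCategory} onChange={handleCategoryChange}>
  <option value="">All categories</option>
  {categories?.map((category) => (
    <option key={category.slug} value={category.slug}>
      {category.name}
    </option>
  ))}
</select>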
Sidebar component

Finally, let's add the Sidebar
component to the ProductPage
component:
// components/pages/ProductPage.tsx

export default function ProductsPage() {
  const { ... } = useProducts(PRODUCTS_PER_PAGE);

  if (error) {
    return <div>{error.message}</div>;
  }

  return (
    <main>
      <Sidebar />
      <div className="...">
        {isLoading ? (
          <div>Loading...</div>
        ) : (
          <>
            <ProductGrid products={products} />
            <Pagination
              currentPage={currentPage}
              totalPages={totalPages}
              onNext={goToNext}
              onPrevious={goToPrevious}
              hasNext={hasNext}
              hasPrevious={hasPrevious}
              isLoading={isLoading}
            />
          </>
        )}
      </div>
    </main>
  );
}
Running the app and visiting the /products
path should show you something like the following, where loading and filtering are managed through the URL:
I’ve also implemented loading individual products when you click the corresponding card title in the grid. Consider implementing that yourself as an assignment.
\\nYou typically don’t need useCallback
or useMemo
Hooks with TanStack Query, which is highly optimized and handles memoization pretty well by default.
If you choose not to use TanStack Query, however, consider using useCallback
in your hooks to cache a function between re-renders and avoid unnecessary API calls. Also, implement useMemo
if you want to cache a response for some time.
The examples we saw in this tutorial maintain browser history, which can be expensive memory-wise with frequent URL updates. Consider using the useNavigate
Hook and replacing history entries instead of pushing them to the browser history:
import { useNavigate } from \\"react-router-dom\\";\\nconst navigate = useNavigate();\\n// ...\\n\\nnavigate(`?${newParams.toString()}`, { replace: true });\\n\\n
If you expand the app further and implement a product search feature, an API call would otherwise fire on every keystroke in the search box. To avoid such rapid URL updates and API calls, debounce the update with a pattern like the following, using Lodash’s debounce method:
import { useCallback } from 'react';
import debounce from 'lodash.debounce';
import { useSearchParams } from 'react-router-dom';

export default function Sidebar() {
  const [searchParams, setSearchParams] = useSearchParams();

  // Debounce the URL update so it runs at most once every 300ms
  const updateSearchFilter = useCallback(
    debounce((searchTerm: string) => {
      setSearchParams(prev => {
        const params = new URLSearchParams(prev);
        params.set('search', searchTerm);
        return params;
      });
    }, 300),
    [setSearchParams]
  );

  return (
    <>
      {/* ... */}
      <input
        type="text"
        onChange={(e) => updateSearchFilter(e.target.value)}
        placeholder="Search products..."
      />
    </>
  );
}
This will keep URL states and API calls in sync, avoiding additional load on your app’s frontend as well as its backend.
The React Router DOM library improves browser accessibility by enabling backward/forward browser navigation, which single-page React apps lack by default.
\\nGeneral accessibility is pretty much the same as what we usually do with our React apps; we use ARIA to make the app accessible for screen readers and follow general accessibility practices.
\\nFor URL-based state management, always focus on providing the most commonly used information through the URL. Avoid exposing information you don’t want to provide publicly through the API, which mostly depends on how your backend and its API are built. Keep an eye on API security and its correct implementation.
URLs with different parameters are treated as separate pages by search engines, which is great from an SEO point of view. Search crawlers discover important pages on your app through internal links on your site, so consider using hyperlinks rather than buttons for internal linking.
\\nYou should also consider creating a dynamic XML sitemap for such unique pages and submit it to major search engines for better visibility.
\\nIf a URL parameter doesn’t make any significant or unique change to the content of the rendered page, consider implementing URL canonicalization to avoid duplicate content problems. Here’s an example to add the right canonical URL to the paginated or filtered products using the React Helmet Async package for React:
import { Helmet } from 'react-helmet-async';
import { useSearchParams } from 'react-router-dom';
import { getCategoryProductsUrl, getProductsUrl } from '@/utils/getApiUrls';

export default function ProductPage() {
  const [searchParams] = useSearchParams();
  const category = searchParams.get('category');

  return (
    <>
      <Helmet>
        <link
          rel="canonical"
          href={category ? getCategoryProductsUrl(category) : getProductsUrl()}
        />
      </Helmet>

      {/* ... */}
    </>
  );
}
In this guide, we explored managing state with URL and search parameters in React. We covered both simple and complex patterns through a store-like application, the source code of which you can find in this GitHub repo.
\\nWe also briefly examined some SEO, accessibility, and performance considerations for URL-based states.
\\nI hope this tutorial helped you learn something new! If you got stuck anywhere, feel free to share your suggestions and questions in the comment section.
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nif...else
and ternary operators\\n You know how people say, “Programming is basically just a bunch of if...else
decisions”? I couldn’t agree more!
Think about it: almost everything in our code comes down to “if this happens, do that; if not, do something else.” It’s the programming DNA. And if you’re just starting your JavaScript journey, I’m excited to show you a really cool trick that’ll make your if...else
statements cleaner with the JavaScript ternary operator.
A few prerequisites before we get into it:
- A basic understanding of if…else conditional statements
- Familiarity with JavaScript's comparison operators:

=== (strict equality)
!== (strict inequality)
> (greater than)
< (less than)
>= (greater than or equal)
<= (less than or equal)
The goal of this article is to add to your JavaScript knowledge of shortcuts by mastering the ternary operator. We will cover everything from the syntax to real-world examples, so you can write cleaner code that your fellow developers will love.
\\nOne of the most repeated principles in programming is the DRY principle: “Don’t Repeat Yourself.”
\\nIt’s pretty self-explanatory; do not be redundant. If there’s a straightforward way that keeps your code maintainable and readable, use it. That’s exactly why the ternary operator in JavaScript was created: as a shorthand for the if...else
statement.
The JavaScript ternary operator shorthand allows us to write conditional statements in a single line, using three parts (hence the name “ternary”). It is the only JavaScript operator that takes three operands.
\\nBelow is the syntax:
condition ? doThisIfTrue : doThisIfFalse
Let’s carefully walk through the syntax above, for a better understanding.
\\ncondition
The syntax starts with a condition
. This is where any expression that evaluates to true or false comes in. For example:
age >= 18\\nusername === \\"admin\\"\\nisLoggedIn && hasPermission\\n
?
– The question markNext, we have the question mark, ?
. Think of it as asking a question like, “Then what?”. This question mark separates your condition from your outcomes, and should always come after your condition.
doThisIfTrue
This code runs only when our condition is true. This can either be a value, an expression, or a function call:
\\n\\"You\'re an adult\\"\\ncalculateBonus()\\n
:
– OtherwiseThis comes right after the true outcome, and just before the false outcome of your condition. It essentially means, “Otherwise, do this instead”.
\\ndoThisIfFalse
This code runs if your condition is false. Just like the true branch,
it can take a value, an expression, or a function call.
Now we know the syntax, let’s play around with some examples.
\\nLet’s walk through practical examples and best practices for writing clean, maintainable code using the ternary operator.
\\nWe’ll write a basic code check for website access. Let’s say, for example, we do not want kids below 14 to have access to our social media application.
\\nUsing the traditional if...else
statement, our logic should look like this:
let age = 17;\\nlet message;\\nif (age >= 14) {\\n message = \\"Welcome to the site!\\";\\n} else {\\n message = \\"Sorry, you must be 14 or older\\";\\n}\\n// Output: \\"Welcome to the site!\\"\\n
In our code above, our user is said to be 17 years old. We create a message
variable that will store whatever message is appropriate for their age. Our if...else
statements dictate that if the user’s age is greater or equal to 14 years, they should be welcomed to the site. Otherwise, they should see a friendly message asking them to return when they’re older.
When we console.log()
the message, our output is Welcome to the site!
, because our user is over 14. But using the ternary operator, it looks like this:
let age = 17;\\nlet message = age >= 14 ? \\"Welcome to the site!\\" : \\"Sorry, you must be 14 or older\\";\\n\\nconsole.log(message); // Output: \\"Welcome to the site!\\"\\n
Just like that, we have turned six lines of code into two:
\\nage >= 14
– Our condition?
– Asks, “What happens if the user is greater or equal to 14 years?”\\"Welcome to the site!\\"
– What we show if they’re 14 or older:
– Says that if the user isn’t 14, show them the friendly message, "Sorry, you must be 14 or older"
Same result, but way cleaner. Let’s test a different age:
\\nage = 13;\\nif (age >= 14) {\\n message = \\"Welcome to the site!\\";\\n} else {\\n message = \\"Sorry, you must be 14 or older go read your books\\";\\n}\\n
The ternary operator looks like this:
\\nage = 13;\\nmessage = age >= 14 ? \\"Welcome to the site!\\" : \\"Sorry, you must be 14 or older go read your books\\";\\n\\n\\n
What will our output be? Because our user is less than 14, they get the friendly message that reads, Sorry, you must be 14 or older go read your books.
When you’re working with ternary operators, there are many values that JavaScript sees as ‘false-like’. These are called falsy values. While the boolean false
is the most obvious one, there are several other subtle values that will also trigger the second part of your ternary:
// Let\'s see what happens with each of these tricky values:\\n\\nlet userInput = null;\\nlet message = userInput ? \\"Got your input!\\" : \\"No input received...\\";\\n// You\'ll see: \\"No input received...\\"\\n\\nlet cartTotal = 0;\\nlet checkoutStatus = cartTotal ? \\"Ready to pay!\\" : \\"Your cart is empty\\";\\n// You\'ll see: \\"Your cart is empty\\"\\n\\nlet userName = \\"\\";\\nlet greeting = userName ? `Hi ${userName}!` : \\"Hi stranger!\\";\\n// You\'ll see: \\"Hi stranger!\\"\\n\\n// A practical example you might use:\\nconst getUserDisplay = (user) => {\\n return user?.name ? user.name : \\"Anonymous User\\";\\n};\\n
Whenever your condition is null
, NaN
, 0
, an empty string (\\"\\"
), or undefined
, JavaScript will run the code after the:
, instead of what’s after the ?
.
It’s like these values are automatic red flags that tell JavaScript, “Nope, let us focus on Plan B!”. This comes in handy in a real-world scenario when you’re handling user input or checking if data exists.
\\nLet me introduce something called an if...else
– if...else
statement. This statement checks multiple conditions and executes different code blocks based on which condition is true.
In cases where you may want to write an if...else if...else statement, you can easily pull this off with a nested ternary operator.
Let’s consider a new example. We have a standard ticket price of $20, but we believe in making our events accessible to everyone. Senior citizens 65 and above receive a 50% discount, bringing their ticket price down to $10.
\\nAdults between 18 and 64 pay the regular price of $20, while young people under 18 can enjoy a special youth rate of $12. Let’s go ahead to write the logic:
\\n// Traditional way\\nlet age = 65;\\n\\nlet ticketPrice;\\n\\nif (age >= 65) {\\n ticketPrice = \\"Senior discount: $10\\";\\n} else if (age >= 18) {\\n ticketPrice = \\"Full price: $20\\";\\n} else {\\n ticketPrice = \\"Student discount: $12\\";\\n}\\n// Output: \\"Senior discount: $10\\"\\n
In our code, we assume the user is a senior citizen, so they get the discount. Using the ternary operator, we have:
\\n// Nested ternary way (use carefully!)\\nlet age = 65;\\nlet ticketPrice = age >= 65 ? \\"Senior discount: $10\\"\\n : age >= 18 ? \\"Full price: $20\\"\\n : \\"Student discount: $12\\";\\n// Output: \\"Senior discount: $10\\"\\n
This example works well. However, when nested operators get too complicated, they counteract the purpose of a ternary operator, which is to make your code more readable. When things get bulky, they become a bit hard to read – which we’ll see in the next section.
\\nWhen using the ternary operator in a single line, the code is not only straightforward; it’s also simpler to read. The question mark is like asking a question about the condition. Let’s take for example the code below:
\\nspeed > 70 ? \\"You get a ticket\\" : \\"You\'re good to go\\";\\n
The condition is followed by a question mark, which asks: is speed greater than 70? If true, you get a ticket; if false, you're good to go.
In cases with a nested condition, the code reads better using the ternary. For example, consider the age ticket logic above. Once we are done with the initial statement (whether someone qualifies for a senior discount), we just use a colon:
instead of “else if.”
This basically means “otherwise, check this.” It flows more naturally, like asking a series of questions:
let ticketPrice = age >= 65 ? "Senior discount: $10"  // Are they a senior?
  : age >= 18 ? "Full price: $20"                     // Otherwise, are they an adult?
  : "Student discount: $12";                          // Otherwise, the youth rate
Ternaries do have their limits. When you start nesting multiple conditions inside each other, they can turn into a tangled mess that’ll make your head spin trying to read it.
\\nLet’s say we want to add a membership status and weekend pricing to our ticket system, using the traditional if...else
:
// Traditional `if...else`:\\nif (age >= 65) {\\n if (isMember) {\\n if (isWeekend) {\\n ticketPrice = \\"Senior member weekend: $12\\";\\n } else {\\n ticketPrice = \\"Senior member: $8\\";\\n }\\n } else {\\n if (isWeekend) {\\n ticketPrice = \\"Senior weekend: $15\\";\\n } else {\\n ticketPrice = \\"Senior regular: $10\\";\\n }\\n }\\n } else {\\n ticketPrice = \\"Regular price: $20\\";\\n }\\n \\n // Now the ternary version - watch as this gets wild!\\n\\n let ticketPrice = age >= 65\\n ? isMember\\n ? isWeekend\\n ? \\"Senior member weekend: $12\\"\\n : \\"Senior member: $8\\"\\n : isWeekend\\n ? \\"Senior weekend: $15\\"\\n : \\"Senior regular: $10\\"\\n : \\"Regular price: $20\\";\\n\\n
In the code above, we’re checking three things: age, membership, and whether or not it’s the weekend.
\\nIn the if...else
version, you can follow the logic by reading each block. But the ternary version resembles a pyramid of question marks and colons. Yes, it has fewer characters, but to what benefit?
In cases like this, you might not want to use the ternary operator, as it inhibits readability.
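One middle ground, rather than abandoning ternaries entirely, is to extract the nested decision into a small helper function so each branch stays a flat, readable one-liner. A sketch of that refactor:

// Each ternary stays flat and readable inside a named helper
function getSeniorPrice(isMember, isWeekend) {
  if (isMember) {
    return isWeekend ? "Senior member weekend: $12" : "Senior member: $8";
  }
  return isWeekend ? "Senior weekend: $15" : "Senior regular: $10";
}

let ticketPrice = age >= 65 ? getSeniorPrice(isMember, isWeekend) : "Regular price: $20";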
\\nTernary operators are actually very common in React components because they work and look great with JSX. Let’s go through some examples of conditional renderings in React.
\\nThe code below shows a traditional if...else
approach, where we check if a user is logged in and return different welcome messages accordingly:
function UserGreeting({ isLoggedIn, username }) {\\n if (isLoggedIn) {\\n return <h1>Welcome back, {username}!</h1>;\\n } else {\\n return <h1>Please log in</h1>;\\n }\\n}\\n
Instead of this, we can write:
\\nfunction UserGreeting({ isLoggedIn, username }) {\\n return (\\n <h1>\\n {isLoggedIn ? `Welcome back, ${username}!` : \\"Please log in\\"}\\n </h1>\\n );\\n}\\n
In the code above we achieved exactly the same result by using the ternary operator. It’s even slicker at handling conditional styles or classes. This right here is very common:
\\nfunction Button({ isActive }) {\\n return (\\n <button\\n className={isActive ? \\"bg-blue-500\\" : \\"bg-gray-300\\"}\\n >\\n {isActive ? \\"Active\\" : \\"Inactive\\"}\\n </button>\\n );\\n}\\n
The React button component above takes a single prop isActive
and uses ternary operators to toggle both its background color (between blue and gray) and its text content (between \\"Active\\"
and \\"Inactive\\"
) based on whether isActive
is true or false.
We can handle a loading state too:
function DataDisplay({ isLoading, data }) {
  return (
    <div>
      {isLoading
        ? <span>Loading...</span>
        : <div>{data.map(item => <p key={item}>{item}</p>)}</div>
      }
    </div>
  );
}
if...else
and ternary operatorsDuring this article, we have seen the advantages and pitfalls of both the traditional if...else
statement and the ternary operator. Let’s compare them directly:
 | if…else Statements | Ternary Operators
---|---|---
Readability | Simple and clear to read and understand | Can become unreadable when deeply nested
Code Length | Takes up more lines of code | Clean, one-line code
Best Use Case | Suitable for complex logic with multiple conditions | Ideal for simple conditional assignments
Execution Scope | Great for executing multiple lines of code | Works well for inline JSX in React
Skill Level Required | Familiar to all levels of developers | Can confuse newbies in JavaScript
Potential Downsides | Looks verbose for very simple conditions; sometimes feels like overkill for simple checks | Easy to abuse with complex conditions
Overall Practicality | Preferred for clarity and structured logic | Best for concise, straightforward conditions
The ternary operator is supported by all major browsers; you can check the full compatibility chart on Can I Use.
\\nIn this article, we walked through the best practices of using the ternary operator in JavaScript, its advantages, and its pitfalls. I will leave you a little advice of mine; if you have to think twice about whether a ternary is readable, it’s probably time to use an if…else
statement instead. Keep it simple, and keep coding!
When building a React Native app, choosing the right UI components can dramatically speed up development and ensure a polished, platform-consistent design. This is where React Native UI libraries come in — they provide pre-built, ready-to-use UI elements like buttons, input fields, and modals, helping developers create beautiful and functional interfaces without starting from scratch.
\\nUnlike general component libraries, which may include utilities for animations, forms, or other functionalities, UI libraries focus specifically on visual components that align with platform design guidelines. Popular options like glustack (formerly NativeBase) and React Native UI Kitten offer customizable, production-ready UI kits that streamline the development process.
\\nIn this article, we’ll explore the 10 best React Native UI libraries — comparing features, theming support, and use cases — to help you choose the right tools for your next project.
\\nUpdate history:
\\nReact Native UI libraries offer predeveloped components that help accelerate project delivery. For example, developers can create icon buttons with react-native-vector-icons. Using a UI library with a complete UI kit eliminates the need to write custom styles for built-in UI elements or install multiple third-party components. UI libraries typically provide a collection of customizable UI elements for building modern apps.
\\nWith so many great options available, choosing the right React Native UI library can be challenging. However, understanding each library’s components, features, limitations, and developer support makes it easier to select one based on your design goals.
\\nThe following open source React Native UI libraries can enhance your development process by improving efficiency and ensuring a consistent user experience across platforms like iOS and Android.
\\nBelow is a quick comparison table of the libraries covered in this article:
\\nLibrary | \\nBest for | \\nTheming support | \\nweb support | \\nLive example | \\nUnique features | \\n
---|---|---|---|---|---|
gluestack UI | \\nHighly customizable UI with Tailwind-like styling | \\nUses Tailwind CSS utilities for styling | \\nYes | \\nYes | \\nHighly flexible UI components using Tailwind-like styling | \\n
Tamagui | \\nPerformance-focused custom UI components | \\nCross-platform, scalable theming | \\nYes | \\nYes | \\nPerformance-optimized UI, supports complex designs | \\n
React Native Paper | \\nMaterial Design-based UI components | \\nLight & Dark themes | \\nYes (React Native Web) | \\nYes | \\nBabel plugin to reduce bundle size, Material Design components | \\n
React Native Elements | \\nGeneral-purpose UI components with customization | \\nCustom themes with ThemeProvider | \\nYes (React Native Web) | \\nYes | \\nFlexible customization, reduces boilerplate code | \\n
React Native UI Kitten | \\nEva Design System-based UI components | \\nLight & Dark themes | \\nYes (React Native Web) | \\nYes | \\nSupports right-to-left writing system, Eva Design System-based UI | \\n
RNUIlib | \\nModern, animated UI components | \\nSupports theming | \\nYes | \\nYes | \\nAnimated components, modern UI elements | \\n
Shoutem UI | \\nComposable UI components with predefined styles | \\nSupports theming with Shoutem Themes | \\nYes | \\nYes | \\nCSS-like styling, animation components for complex UI | \\n
Lottie for React Native | \\nSmooth animations with Lottie | \\nN/A | \\nN/A | \\nYes | \\nAirbnb’s Lottie animations, JSON-based animated graphics | \\n
React Native Maps | \\nCustomizable map components | \\nN/A | \\nN/A | \\nYes | \\nMapView, Polygons, Polylines, Animated map elements | \\n
React Native Gifted Chat | \\nPre-built chat component | \\nCustomizable UI | \\nN/A | \\nYes | \\nPre-built chat component with customizable UI, quick replies | \\n
gluestack UI, previously known as NativeBase, is a performance-focused library designed for speed and efficiency across web and mobile apps using React and React Native:
\\nIt offers a collection of 30+ pre-built, customizable components along with styling utilities that accelerate development while ensuring design consistency.
\\ngluestack UI is easy to use—simply copy and paste entire components into your app—yet still allows full control to tailor each UI element to your specifications.
\\nVersion 2 of gluestack UI uses Tailwind CSS utility classes in conjunction with NativeWind’s styling engine for unparalleled flexibility.
\\nTamagui is a performance-focused, customizable React Native component library and styling solution that offers a unique approach to building performant and scalable React Native applications.
\\nUnlike traditional component libraries, Tamagui focuses on providing a foundation for building custom components rather than offering a pre-built set of UI components. This gives developers greater control over the look and feel of their applications. Tamagui is also easy to get started with; you can learn more in the article Tamagui for React Native: Create faster design systems.
\\nKey features of Tamagui include its platform-agnostic nature, allowing developers to write code once and deploy seamlessly across web and mobile platforms.
\\nReact Native Paper is a cross-platform React Native UI library that is based on Google’s Material Design. Developed by the official React Native development partner Callstack, React Native Paper has theming support and offers customizable and production-ready components.
\\nWhen using this React Native UI library, you can reduce its bundle size by using a Babel plugin that allows you to optionally require modules. This will exclude all the modules that your app doesn’t use and rewrite the import statements to include only those that are imported in the app’s component files. React Native Paper also supports web using React Native Web.
\\nHow do you use React Native Paper themes? Applying themes to a particular component is easy; React Native Paper comes with two default themes, namely light
and dark
, which you can extend. It uses the react-native-vector-icons library to support and use icons correctly in buttons, floating action buttons, lists, and more.
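For instance, a minimal sketch of extending the default light theme could look like the following (assuming React Native Paper v5's MD3 theme exports; the App import path is illustrative):

import * as React from "react";
import { MD3LightTheme as DefaultTheme, PaperProvider } from "react-native-paper";
import App from "./src/App";

// Extend the default light theme with custom colors
const theme = {
  ...DefaultTheme,
  colors: {
    ...DefaultTheme.colors,
    primary: "tomato",
    secondary: "gold",
  },
};

export default function Main() {
  return (
    <PaperProvider theme={theme}>
      <App />
    </PaperProvider>
  );
}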
One of the oldest and easiest libraries to start with, React Native Elements is a cross-platform UI library that implements Material Design. Instead of following an opinionated design system, this toolkit offers a more basic structure through its generalized inbuilt components, meaning you’ll have more control over how you want to customize components. Customization of any component in this library will include a mixture of some custom props, as well as props from the React Native core API.
\\nThat being said, when using this React Native UI library, I’ve found that I can write much less boilerplate code than I do when using some of the other libraries covered in this post. The applications built using this UI toolkit also look and feel universal across both iOS and Android platforms.
\\n\\nThemeProvider
offers support for theming. Unlike some of the other libraries, which give you both light and dark themes, you’ll have to define your themes to make them work with React Native Elements. You can also use React Native Elements in web projects by using React Native Web.
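A minimal sketch of defining and applying a custom theme, assuming the current @rneui/themed packages:

import { ThemeProvider, createTheme, Button } from "@rneui/themed";

// Define light and dark palettes for the custom theme
const theme = createTheme({
  lightColors: { primary: "#2089dc" },
  darkColors: { primary: "#121212" },
  mode: "light",
});

export default function App() {
  return (
    <ThemeProvider theme={theme}>
      <Button title="Themed button" />
    </ThemeProvider>
  );
}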
There are more than 20 essential UI components that you can use with UI Kitten, and it is also one of the few UI libraries that offers support for the right-to-left writing system for all of its components, a fact to be noted for global apps. It also has support for the web.
\\nIf you set up the UI Kitten library for an existing project, you’ll have to go through some configuration steps. For new projects, you can easily use a pre-developed app template. Make sure to give its design system a read to understand the design principles first.
\\nWell-maintained and used by Wix, RNUIlib is a library for building amazing React Native apps:
\\nIt supports both older and the latest React Native versions, and it provides more than 20 customized components, some of which, like Drawer
, can be easily integrated for building modern swipeable lists, like the Gmail app’s inbox. It also has custom animated components, like an animated scanner, which is useful for indicating progress for a card, such as an uploading status, as well as an animated image.
RNUIlib is another UI library that supports the right-to-left writing system, and it includes full accessibility support.
\\nIf you’re in the market for a professional-looking React Native UI library for your iOS or Android apps, then the Shoutem UI kit is a great choice:
\\nShoutem UI is an open source library that is a part of the Shoutem UI toolkit.
\\nShoutem UI consists of more than 25 composable and customizable UI components that come with pre-defined styles that support other components. You can build complex UIs by combining them. You can also apply custom CSS-like styling using the Shoutem themes library and animations using the animation components library like ZoomIn
, FadeIn
, etc.
Lottie React Native is an excellent open source animated graphic library developed by Airbnb for creating beautiful animations:
\\nThe Lottie community provides featured animations that you can use freely for React Native iOS or Android applications. You can also create custom animations using Adobe After Effects. Lottie then uses the Bodymovin extension to export the custom animations to JSON format and render them in the native mobile app. Because of the JSON export format, your app will have great performance.
\\nThe lottie-react-native package includes the Lottie
component, which you can use to add Lottie animations in React Native apps. Internally, it uses lottie-android and lottie-ios to render Lottie-formatted files natively on Android and iOS, respectively.
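A minimal sketch of rendering an animation; the JSON path is illustrative, and the component is the package's default export, commonly imported as LottieView:

import React from "react";
import LottieView from "lottie-react-native";

export default function LoadingAnimation() {
  return (
    <LottieView
      // A Lottie JSON file exported via the Bodymovin extension
      source={require("./assets/loading.json")}
      autoPlay
      loop
      style={{ width: 200, height: 200 }}
    />
  );
}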
React Native Maps is another useful library that provides customizable map components for your iOS and Android apps:
\\nIts components include:
MapView
Marker
Polygon
Polyline
Callout
Circle
HeatMap
Geojson
Overlay
With these components, you can offer your users many different experiences on the map. Additionally, you can combine the components with the Animated API to give an animated effect for the components. For example, you can animate the zoom, marker views, and marker coordinates, and also render polygons and polylines on the map.
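For instance, a minimal sketch of a map with a single marker might look like this (the coordinates are illustrative):

import React from "react";
import MapView, { Marker } from "react-native-maps";

export default function MapScreen() {
  return (
    <MapView
      style={{ flex: 1 }}
      initialRegion={{
        latitude: 37.78825,
        longitude: -122.4324,
        latitudeDelta: 0.0922,
        longitudeDelta: 0.0421,
      }}
    >
      <Marker
        coordinate={{ latitude: 37.78825, longitude: -122.4324 }}
        title="A sample marker"
      />
    </MapView>
  );
}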
\\nKeep in mind that React Native Maps v1.14.0 and above require React Native ≥v0.74, while versions below 1.14.0 are compatible only with React Native ≥v0.64.3. Be sure to update your React Native version if you plan to use React Native Maps with an older project.
\\nIn some development scenarios, React Native developers add chat screens to their mobile apps. For example, integrating a chatbot or implementing an inter-user chat system requires a chat component that includes incoming and outgoing messages with avatars, a text input for typing, and a send button.
\\nThe React Native Gifted Chat library offers a pre-developed customizable chat component that you can use without having to build one from scratch:
\\nThis chat component library comes with features like a highly customizable UI, useful event handlers such as onPressAvatar
and onInputTextChanged
, a typing indicator, quick reply options, and composer actions for attaching photos.
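A minimal sketch of the component in use, with state handling kept deliberately simple:

import React, { useCallback, useState } from "react";
import { GiftedChat, IMessage } from "react-native-gifted-chat";

export default function ChatScreen() {
  const [messages, setMessages] = useState<IMessage[]>([]);

  // Append outgoing messages to the conversation
  const onSend = useCallback((newMessages: IMessage[] = []) => {
    setMessages((previous) => GiftedChat.append(previous, newMessages));
  }, []);

  return <GiftedChat messages={messages} onSend={onSend} user={{ _id: 1 }} />;
}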
When discussing UI development in React Native, it’s important to differentiate between a UI library and a component library, as these terms are often used interchangeably.
\\nA UI library in React Native provides a set of prebuilt, ready-to-use components that help developers build apps faster. Instead of creating UI elements like buttons, input fields, or modals from scratch, developers get fully designed and functional components that follow platform-specific styles. This saves time and ensures a consistent app design.
\\n\\nA component library, on the other hand, is a broader category. It includes UI libraries but also encompasses UI kits, form builders, and specialized tools for handling animations, charts, or drag-and-drop interfaces. Examples include libraries for animations, charts, or drag-and-drop interfaces.
\\nSome great examples of React Native UI libraries are NativeBase and UI Kitten, while libraries like Lottie and Tamagui better fit the component library description.
\\nThe best React Native component library depends on your specific project needs. When multiple component libraries meet your design or development requirements, selecting one with strong developer support, an active development timeline, and comprehensive documentation is key.
\\nAll the component libraries in this list are actively maintained and are designed to speed up development by providing efficient, ready-to-use components. As long as you have a clear vision for your UI design, any of these libraries should work well.
\\nYou can find more third-party, open source UI component libraries in the awesome-react-native GitHub repository. For additional guidance, check out the official React Native docs or this guide on using React Native components.
\\nDo you have a favorite React Native component library? Let us know in the comments!
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nIn modern system engineering and programming, containers are a widely used tool to package and distribute software. In the most general terms, a container is a standalone piece of software that includes everything it needs to run (code, dependencies, tools, runtime, and so forth). Containers can be easily deployed and run by anyone, making the development and deployment process easier and faster.
\\nOne of the most widespread container platforms is Docker, a monolithic tool. It handles every aspect of the containerization process, from building, running, and inspecting container images.
\\nIn this article, we’ll cover a few Docker alternatives. Each tool covered in this tutorial adheres to the Open Containers Initiative (OCI) specification, which includes specifications for container runtime, container distribution, and container images. Let’s get started!
\\nEditor’s note: This article was updated by Matteo Di Pirro in February 2025 to expand coverage of Docker alternatives and include new information on additional alternatives, including CRI-O.
\\nPodman, a container engine developed by RedHat, is one of the most prominent Docker alternatives for building, running, and storing container images. Podman maintains compatibility with the OCI container image spec just like Docker, meaning Podman can run container images produced by Docker and vice versa.
\\nPodman is the default container engine in RedHat 8 and CentOS 8.
\\nPodman’s command line interface is identical to Docker’s, including the arguments. As a matter of fact, we can simply alias the docker
command to podman
without noticing the difference, making it easy for existing Docker users to transition to Podman:
# .bashrc\\nalias docker=podman\\n\\n
Unlike Docker, which uses the dockerd
daemon to manage all the containers under its control, Podman is daemonless. Therefore, there’s no persistent connection to some long-living process, removing the single point of failure issue in Docker, where an abrupt crash in the daemon process can kill running containers or cause them to become orphaned.
In simpler terms, Docker has a client-server logic with a daemon in between. Podman, on the other hand, does not need the mediator and its architecture is therefore more lightweight and secure.
\\nTo run and manage containers, Podman relies on systemd
.
Podman also differentiates itself from Docker by using rootless containers by default. Root access is not necessary for launching and operating a container, and running without it helps mitigate potential vulnerabilities in the container runtime that could lead to privilege escalation.
\\nTo be fair, Docker supports a rootless mode as well, which debuted as an experimental feature in Docker Engine v19.03 before being stabilized in v20.10. However, its use is not yet widespread in the ecosystem. Furthermore, Podman got there first.
\\nThis doesn’t necessarily mean that Podman is safer than Docker. However, rootless containers are safer than containers with root privileges. Furthermore, Docker daemons have root privileges, which makes them more suitable for an attacker. Still, Podman can run root containers, so it’s not immune from the problem.
\\nAn additional feature of Podman that is not yet present in Docker is the ability to create and run pods. A pod is a collection of one or more containers that utilize a shared pool of resources and work closely together to achieve a specific function. Pods are also the smallest execution unit in Kubernetes, making the transition to Kubernetes easier, should the need arise. For example, we might have a pod running a backend and a frontend container, sharing resources and running different containers for the same application.
\\nLastly, Podman is not an all-in-one solution like Docker. For example, it does not support Docker Swarm, even if it has recently introduced Docker Compose support to be compliant with Docker Swarm. Furthermore, Podman specializes in running containers. To build them, it needs another tool, named Buildah (see below). Lastly, Podman is part of the RedHat OpenShift Container Platform.
\\nIn conclusion, there’s no winner between Podman and Docker. The former might be more suitable when we need a specialized lightweight tool to run containers, but the latter is an all-in-one solution. In many cases, Podman can replace Docker. Therefore, when choosing one over the other, always consider your requirements.
\\n\\nBuildah is a Docker alternative for building images. Developed by RedHat, Buildah is often used together with Podman. In fact, Podman uses a subset of Buildah’s functionality to implement its build
subcommand.
Buildah is a great tool if we need fine-grained control over the whole image-building process thanks to the Buildah CLI tool. This is very important when we want to optimize our images or when we work with complex builds.
\\nSimilarly to Podman, the images produced by Buildah fully comply with the OCI specification, operating in the same manner as images built with Docker. Buildah can also create images using an existing Dockerfile
or Containerfile
. Unlike Docker, however, Buildah lets us use Bash scripts that sidestep the limitations of Dockerfiles, automating the process more easily. Bash scripting uses the commands of Buildah CLI to install packages, copy files, and configure the layers of an image. It gives us more control, but, in turn, the learning curve is steeper.
Like Podman, Buildah follows a fork-exec model that doesn’t require a central daemon or root access to work. Thus, all operations are executed directly by the Buildah CLI, which then interacts with the container runtime (e.g. runc). This makes it more lightweight than Docker as well as potentially more secure.
\\nOne advantage of using Buildah over Docker is its ability to commit many changes to a single layer. This was a long-requested feature among container users. Buildah also provides the ability to create an empty container image storing only metadata, making it easy to add only the required packages in the image. Consequently, the final output is smaller than its Docker equivalent.
\\nBuildah images are user-specific, so only the images built by a user will be visible to them.
\\nAs for the use cases, Docker is ideal for developers needing a fast way to produce images. Simply write a Dockerfile and let Docker take care of the rest. Buildah, on the other hand, is for advanced users who like to get their hands dirty. Whenever we need fine-grained control over the build process, Buildah is a perfect fit.
\\nAt the time of writing, Buildah works on several Linux distributions but is not supported on Windows or macOS.
\\nBuildKit is an improved image-building engine for Docker that was developed as part of the Moby project. It is the default builder for users on Docker Desktop and Docker Engine as of version 23.0, but it also comes as a standalone tool.
\\nOne of BuildKit’s headline features includes improved performance through parallel processing of image layers that don’t depend on each other. Another feature is better caching, which reduces the need to rebuild each layer of an image. This tool offers extensibility through a more pluggable architecture. Finally, BuildKit introduces rootless builds and the ability to skip unused stages.
\\nIf we’re using Docker Engine < v23, we can optionally enable BuildKit by setting the DOCKER_BUILDKIT
environment variable to 1
:
$ DOCKER_BUILDKIT=1 docker build .\\n\\n
Alternatively, we can configure Docker to use BuildKit by default simply by editing the /etc/docker/daemon.json
file as follows:
{\\n \\"features\\": {\\n \\"buildkit\\": true\\n }\\n}\\n\\n
After saving the file, reload the daemon to apply the change:
\\n$ systemctl reload docker\\n\\n
It’s easy to tell when BuildKit is being used because of its output, which differs from the default engine:
$ DOCKER_BUILDKIT=1 docker build .
[+] Building 30.8s (7/7) FINISHED
 => [internal] load build definition from Dockerfile                         0.1s
 => => transferring dockerfile: 142B                                         0.1s
 => [internal] load .dockerignore                                            0.0s
 => => transferring context: 2B                                              0.0s
 => [internal] load metadata for docker.io/library/centos:latest             0.6s
 => [auth] library/centos:pull token for registry-1.docker.io                0.0s
 => [1/2] FROM docker.io/library/centos:latest@sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6  14.3s
 => => resolve docker.io/library/centos:latest@sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c  0.0s
 => => sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177 762B / 762B  0.0s
 => => sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc 529B / 529B  0.0s
 => => sha256:5d0da3dc976460b72c77d94c8a1ad043720b0416bfc16c52c45d4847e53fadb6 2.14kB / 2.14kB  0.0s
 => => sha256:a1d0c75327776413fa0db9ed3adcdbadedc95a662eb1d360dad82bb913f8a1d1 83.52MB / 83.52MB  2.0s
 => => extracting sha256:a1d0c75327776413fa0db9ed3adcdbadedc95a662eb1d360dad82bb913f8a1d1  10.8s
 => [2/2] RUN yum -y install httpd                                           14.7s
 => exporting to image                                                       1.0s
 => => exporting layers                                                      1.0s
 => => writing image sha256:c18170a407ca85218ee83526075a3f2a2e74f27d7bd5908ad68ba2328b4f4783  0.0s
Developed by Google, Kaniko is used to build container images inside of an unprivileged container or a Kubernetes cluster. Like Buildah, Kaniko does not require a daemon, and it can build images from Dockerfiles without depending on Docker.
\\nFor example, here’s a Kubernetes Pod
definition of building an image using Kaniko:
apiVersion: v1\\nkind: Pod\\nmetadata:\\n name: kaniko\\nspec:\\n containers:\\n - name: kaniko\\n image: gcr.io/kaniko-project/executor:latest\\n args: [\\"--dockerfile=<path-to-dockerfile>\\",\\n \\"--context=dir://<path-to-source-code>\\",\\n \\"--destination=<registry>/<image-name>:<tag>\\"]\\n restartPolicy: Never\\n\\n
In the code snippet above, we define a container
, running the latest Kaniko executor
Docker image. The container’s arguments build a new image, based on a Dockerfile possibly mounted on the pod itself.
The whole process is a bit more cumbersome when run locally. In this case, everything happens in the context of a docker run
command:
docker run \\\\\\n -v /path/to/your/source/code:/workspace \\\\\\n gcr.io/kaniko-project/executor:latest \\\\\\n --dockerfile=/workspace/Dockerfile \\\\\\n --destination=your-registry/your-image-name:your-tag\\n\\n
The major difference between Docker and Kaniko is that the latter is more focused on Kubernetes workflows, and it is meant to be run as an image, making it a bit less suitable for local development. Furthermore, being more focused on running inside Kubernetes containers, Kaniko can only run on Linux.
\\nSimilarly to Docker, Kaniko produces OCI-compliant images, making it a drop-in replacement for Docker-in-Kubernetes use cases.
\\nSpeaking of use cases, Kaniko is particularly suitable for building secure images in environments (aka CI/CD pipelines) when running the privileged mode needed by Docker is a no-go.
\\nRegarding the build process, in Kaniko we can specify a set of directories or files needed during the build. In contrast, Docker needs the entire project directory (the so-called build context) to be sent to the Docker daemon. Consequently, Kaniko builds are generally faster.
\\nSkopeo is another tool developed by RedHat and part of the RedHat OpenShift Container Platform. As such, it is usually adopted along with Podman and Buildah.
\\nMore precisely, Skopeo provides us with a way to inspect Docker images. In particular, the inspect
sub-command returns low-level information about a container image, similar to docker inspect
.
Not surprisingly, and similar to Podman and Buildah, Skopeo doesn’t require a daemon to run nor does it need root privileges. Lastly, it works with OCI-compatible images.
\\nIn contrast to Docker, Skopeo can help us gather useful information about a repository or a tag without having to download it first:
$ skopeo inspect docker://docker.io/fedora # inspect remote image
{
    "Name": "docker.io/library/fedora",
    "Digest": "sha256:72c6c48a902baff1ab9948558556ef59e3429c65697287791be3c709738955b3",
    "RepoTags": [
        "20",
        "21",
        "22",
        "23",
        "24",
        "25",
        "26",
        "26-modular",
        "27",
        "28",
        "29",
        "30",
        "31",
        "32",
        "33",
        "34",
        "35",
        "36",
        "branched",
        "heisenbug",
        "latest",
        "modular",
        "rawhide"
    ],
    "Created": "2021-11-02T21:29:22.547065293Z",
    "DockerVersion": "20.10.7",
    "Labels": {
        "maintainer": "Clement Verna <[email protected]>"
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:fc811dadee2400b171b0e1eed1d973c4aa9459c6f81c77ce11c014a6104ae005"
    ],
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "DISTTAG=f35container",
        "FGC=f35",
        "FBR=f35"
    ]
}
In this regard, Skopeo is simply a replacement for the docker inspect
command, with similar use cases. However, it benefits from enhanced security, as we already saw for Podman and Buildah.
A major use case of Skopeo is the ability to copy a container image from one remote registry to another or a local directory:
\\nskopeo login quay.io\\n\\n$ skopeo copy docker://hello-world:latest docker://quay.io/hello-world:latest\\n\\n
This is a nice-to-have, since Docker Hub has introduced rate limits and paid tier changes.
\\nLastly, another useful feature is Skopeo’s ability to synchronize images between container registries and local directories with the skopeo sync
command.
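For example, mirroring a repository from Docker Hub into a local directory might look like this (the paths are illustrative):

$ skopeo sync --src docker --dest dir docker.io/library/busybox /tmp/mirror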
As we said for Podman and Buildah, Skopeo does not mean to fulfill all the use cases addressed by Docker. On the other hand, adopting the RedHat OpenShift Container Platform would give us a toolbox for all our needs.
\\nDive is not a Docker alternative per se, but it’s surely worth a mention. It’s a tool for inspecting, analyzing, and optimizing container images. Dive can show the content of an image by layer, highlighting the differences between each. Through image analysis, Dive provides us with a percentage score for efficiency by estimating wasted space, which is helpful when trying to reduce the image size.
\\nAnother useful feature is Dive’s CI integration, which provides a pass-or-fail result based on the image’s efficiency and the amount of wasted space. To access the CI integration feature, set the CI
environmental variable to true
when invoking any valid dive
command:
$ CI=true dive node:alpine\\n\\n
In conclusion, Dive is a great tool for learning and developing confidence in what we are building/shipping. We can use it to understand how Docker images work and how to write efficient Dockerfiles. Thanks to it, we can make any changes to the Dockerfile and see how it has affected the resulting layer structure.
\\nrunc is a CLI tool providing a low-level interface to create and run containers on Linux based on the OCI specification. runc was formerly embedded into Docker as a module but was later spun into a standalone tool in 2015. It’s specifically designed to be a lightweight and secure runtime that’s easily integrated with higher-level orchestrators (e.g. Kubernetes).
\\nrunc remains the default container runtime in Docker and most other container engines. An alternative to runc is crun, developed by RedHat and written in C instead of Go like most Linux container tools.
\\nThe main advantages of crun over runc are all about performance. According to RedHat, the crun binary is up to 50 times smaller and up to twice as fast as the runc binary. What’s important is that we can use runc and crun interchangeably, as both implement the OCI runtime specification. crun, however, supports more low-level features that make it the preferred choice if we want to have fine-grained control over the runtime our containers run on. For example, with crun we can set stricter limits on the memory allowed in the container.
\\nLastly, being written in C, crun works on architectures where Go support is limited or absent (e.g. Risc-V).
\\ncrun is Production-ready and we can therefore use it as a runc replacement without worries.
\\nLXD and Docker are not competing container technologies, as they serve different purposes. The former, in particular, is a virtual machine manager and image-based system container. It offers images for a variety of Linux distributions as well as a complete user experience centered on entire Linux systems operating within containers or virtual machines.
\\nLXD runs the so-called system containers, that are similar to virtual/physical machines, as they run a full operating system. Normally, system containers are long-lasting and used to host several applications.
\\nDocker, on the other hand, runs application containers that package and run a single process or a service.
\\nBoth application and system containers share a kernel with the host operating system. The main difference, however, is that the former runs a single application/process, whereas the latter runs a full operating system, providing their users with more flexibility.
\\nLXD offers compatibility for many storage backends and network types, along with the ability to run on hardware such as a laptop or cloud instance.
\\nA component of LXD security and access control is based on group membership. As a root user, we may create an lxd
group and add trusted members or users so that we can communicate with the local daemon and have complete control over LXD.
LXD provides snap packages for many Linux distributions (including Ubuntu, Fedora, Arch Linux, Debian, and OpenSUSE) to facilitate installation. LXD’s most important features are its basic API, instances, profiles, backup, export, and configurability.
\\nBased on what we saw above, LXD and Docker are not competing technologies. In fact, we could run Docker containers in an LXD system container. Generally speaking, the former is more similar to VMWare or KVM hypervisors, even though it is much lighter on resources and without the virtualization overhead.
\\nDocker, on the other hand, abstracts away storage, networking, and logging. It was specifically designed to decouple and isolate individual processes, which can then be scaled independently from the rest of the application or system they are a part of (basically a microservice architecture).
containerd is a container runtime created by Docker that manages the complete lifecycle of containers on its host system. containerd retrieves container images from container registries, mounts storage, and enables networking for a container. In other words, Docker builds upon containerd to give developers a more comprehensive experience.
\\ncontainerd, together with Kubernetes, Envoy, Prometheus, and CoreDNS, graduated from the CNCF (Cloud Native Computing Foundation) in February 2019. It is available as a Linux and Windows daemon. Some of its users include eliot, Cloud Foundry, Docker, Firecracker, and Bottlerocket.
\\nThe main containerd features are as follows:
\\nA container is a metadata object in containerd. A container can be associated with resources such as an OCI runtime specification, image, root filesystem, and other features:
\\nredis, err := client.NewContainer(context, \\"redis-master\\")\\ndefer redis.Delete(context)\\n\\n
Namespaces enable several consumers to use the same containerd instance without conflict. They offer the advantage of sharing data while maintaining isolation for containers and images.
\\nTo provide a namespace for API calls, run the following code:
\\ncontext = context.Background()\\n\\n//create a context for docker\\ndocker = namespaces.WithNamespace(context, \\"docker\\")\\n\\ncontainerd, err := client.NewContainer(docker, \\"id\\")\\n\\n
To provide a default namespace on the client, do the following:
\\nclient, err := containerd.New(address, containerd.WithDefaultNamespace(\\"docker\\"))\\n\\n
containerd provides a complete client package to assist us in integrating it into our platform:
\\nimport (\\n \\"context\\"\\n\\n \\"github.com/containerd/containerd\\"\\n \\"github.com/containerd/containerd/cio\\"\\n \\"github.com/containerd/containerd/namespaces\\"\\n)\\n\\n\\nfunc main() {\\n client, err := containerd.New(\\"/run/containerd/containerd.sock\\")\\n defer client.Close()\\n}\\n\\n
For operating containers, containerd fully implements the OCI runtime specification.
\\nWhen constructing a container, we can indicate how to alter the specification:
\\nredis, err := client.NewContainer(context, \\"redis-master\\", containerd.WithNewSpec(oci.WithImageConfig(image)))\\n\\n
CRI-O is another container runtime implementing the Kubernetes Container Runtime Interface. This means we can use it in our Kubernetes clusters to run containers.
\\nCRI-O is an alternative to containerd and the other container runtimes. Similar to the latter, it pulls container images from registries, manages them on disk, and runs them.
\\nTherefore, their use cases are pretty much the same. Generally speaking, we should choose based on the ecosystem we’re adopting. CRI-O is backed by RedHat and used in the RedHat OpenShift Container Platform.
Hence, if we are migrating to tools like Podman and Buildah, using CRI-O rather than containerd might be a good choice. For example, you might receive more support from RedHat if you're paying for it.
Docker Desktop is a fully featured application allowing Mac and Windows systems to use a Linux virtual machine to run the Docker engine. It enables us to create and share containerized applications and microservices.
In August 2021, however, Docker announced changes to Docker Desktop's licensing, meaning it would no longer be free for companies with more than 250 employees or over $10 million in revenue.
\\nHowever, there are several alternative approaches to containerization, often in the form of standalone tools, which sometimes offer better results than what Docker Desktop delivers:
Rancher Desktop features a built-in GUI and is easy to use. It uses the same container runtime as Kubernetes, and it offers container management for building, pushing, and running containers.
\\nMinikube is a way to run Kubernetes clusters locally on Mac, Windows, and Linux. It’s open-source and is designed to be very customizable. For instance, we can configure the use of alternative container runtimes, custom virtual machine images, and support for GPU and other hardware pass-through.
\\nLima is a container management application designed specifically for macOS, but Linux users can enjoy it as well.
\\nIn this article, we’ve described several Docker alternatives for building, running, and distributing container images. Although Docker remains the dominant platform for containerization and container management, it’s good to know about different tools that may work better for our use cases.
\\nReplacing a specific Docker aspect should be fairly seamless because each tool mentioned adheres to the OCI specification.
Generally speaking, in some cases we don't really have to choose between Docker and something else. Because basically all of the Docker alternatives are OCI-compliant, we can experiment and use different tools in different environments. For example, we could use Docker locally for its simplicity and ecosystem, while using containerd in our production environment. Even if (sometimes) limited, Dockerfiles do not couple us to Docker. As we saw, many tools are capable of building images from them.
\\nBe sure to leave a comment if there is any tool you think we missed. Thanks for reading!
The SOLID principles (single responsibility, open/closed, Liskov substitution, interface segregation, and dependency inversion) are essential for writing maintainable, scalable, and flexible software. While many developers are familiar with these principles, fully grasping their application can be challenging.
\\nBy the end of this guide, you will have a clear understanding of the dependency inversion principle (DIP), its importance, and how to implement it across multiple programming languages, including TypeScript, Python, Java, C#, and more.
\\nThis post was updated by Oyinkansola Awosan in February 2025 to explain DIP more conceptually with broader applications, including expanding the scope of the article beyond TypeScript to Java, Python, and C#.
\\nThe dependency inversion principle states that high-level modules should not depend on low-level modules. Instead, both should depend on abstractions. This principle ensures that software components remain loosely coupled, making them easier to modify and maintain.
In the context of the dependency inversion principle, the high-level module or component provides a high-level abstraction over the fine-grained implementation details and general functionality of the system. High-level modules achieve this level of abstraction by not communicating directly with low-level modules. In other words, high-level modules talk to an interface, and the interface mediates their interaction with the low-level modules that perform specific tasks.
On the other hand, low-level modules are set up to handle the underlying logic in a system architecture. They are also set up to conform to the interface, which simply defines the methods and properties a low-level component must provide. Therefore, low-level modules depend heavily on interfaces to be useful.
\\nAbstractions in DIP play a significant role in ensuring that high-level and low-level modules are decoupled, making the codebase highly flexible, maintainable, and testable.
What does it mean to be decoupled? In the context of the dependency inversion principle, low-level and high-level modules should never depend directly on each other; both should depend on the interface instead, particularly when reusability is a paramount concern.
Potential issues can arise when low-level and high-level modules are tightly coupled. For example, a change in the former forces the latter to be updated as well. Furthermore, it becomes problematic if the high-level modules fail to work with other low-level modules' implementation details.
\\nTo write high-quality code, you must understand the dependency inversion principle and dependency injection. So, what is the difference between the two? Let’s find out.
First, when we talk about dependencies in object-oriented programming, we usually mean an object type (a class) that another class has a direct relationship with. You could also say this direct dependency means that the class and the object type are coupled. For instance, a class could depend on another class because it has an attribute of that type, because an object of that type is passed as a parameter to a method, or because the class inherits from the other class.
Dependency injection is a design pattern. The idea of the pattern is that if a class uses an object of a certain type, we do not also make that class responsible for creating that object. Dependency injection shifts the creation responsibility to another class. This design is, therefore, an inversion-of-control technique, and it makes code testing considerably easier.
\\nDependency inversion, on the other hand, is a principle that aims to decouple concrete classes using abstractions, abstract classes, interfaces, etc. Dependency inversion is only possible if you separate creation from use. In other words, without dependency injection, there is no dependency inversion.
\\n\\nDIP employs a concept that uses high-level modules and low-level modules, where the former contains the business rules that solve the business problem. Clearly, these high-level modules contain most of the business value of the software application. Below are some of the benefits of DIP:
\\nCoupling means how closely two parts of your system depend on or interact with each other. In one sense, it is how much the logic and implementation details of these two parts begin to blend. When two pieces of code are interdependent this way, they are said to be tightly coupled.
Loose coupling, on the other hand, is when two pieces of code are highly independent and isolated from each other. Loose coupling promotes code maintainability because you will find that all the code related to a particular concern is colocated.
\\nFurthermore, loose coupling provides more flexibility, allowing you to change the internals of one part of your system without those changes spilling over into the other parts. You could even easily swap out one part entirely, and the other part would not be aware of that.
Writing good code lets other people understand it. If you have ever encountered a poorly written or structured codebase, you know how difficult it is to build a mental model of the code.
\\nThe dependency inversion principle helps to ease updating, fixing, or improving a system since the high-level and low-level modules are loosely coupled, and both only rely on abstractions.
Test-driven development is proven to reduce bugs, errors, and defects while improving the maintainability of a codebase. It also requires some additional effort. Testing can be done in two ways: manually or automated.
Manual testing involves a human clicking every button and filling out every form, then filing Jira tickets so the developers can work through the backlog. This kind of manual testing is not very efficient for large-scale projects, and that is where automated testing comes into play.
\\nAutomated testing is a better approach where developers use testing tools to write code for the sole purpose of testing the main application code in the codebase. In a decoupled architecture, like the one provided by the dependency inversion principle, automated unit testing is relatively easier and faster.
A decoupled architecture ensures that real implementations can be swapped for fake or mock objects during testing, since the high-level and low-level modules are independent of each other.
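For instance, using the Database and UserService types defined later in this guide, a unit test might swap in a fake implementation like this (a minimal sketch; the fake and the assertion are illustrative):

// A fake that records queries instead of talking to a real database
class FakeDatabase implements Database {
  queries: string[] = [];
  query(sql: string): void {
    this.queries.push(sql);
  }
}

// The high-level module is exercised with no real I/O involved
const fake = new FakeDatabase();
const service = new UserService(fake);
service.getUser(1);

console.assert(fake.queries[0] === "SELECT * FROM users WHERE id = 1");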
Scalability is one of the most important concepts in any system design. It defines how a particular system can handle an increased load efficiently, without issues and with zero negative impact on end users.
\\nSo, how does the dependency inversion principle support easy scalability? This principle’s components are loosely coupled, which means that further implementation details can be added to the codebase without modifying the high-level logic.
\\nAssuming a system was initially built to process payment transactions using only one payment gateway, the dependency inversion principle allows you to add more payment methods without breaking the existing functionality.
\\nReusable components have been discussed since the early days of computers. New software development approaches like module-based development mean that component construction and reuse are back in play.
Reusability simply refers to the ability to use the same piece of code or component in multiple places without duplication. The dependency inversion principle ensures that both the high-level and low-level components depend not on each other but on an abstraction, thereby giving developers a shot at reusability. This means that the same high-level logic can be used with different low-level implementations with no issues.
To put this into context, there can be high-level logic that implements notifications, while the low-level implementation details may cover SMS, email, push notifications, or anything else. DIP ensures that there is no need to rewrite the notification logic every time; it is as easy as swapping low-level implementations.
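A minimal TypeScript sketch of that notification example (all names here are illustrative):

// The abstraction both sides depend on
interface Notifier {
  send(message: string): void;
}

// Low-level implementations conform to the interface
class EmailNotifier implements Notifier {
  send(message: string): void {
    console.log(`Email: ${message}`);
  }
}

class SmsNotifier implements Notifier {
  send(message: string): void {
    console.log(`SMS: ${message}`);
  }
}

// High-level logic written once, reused with any Notifier
class AlertService {
  constructor(private notifier: Notifier) {}
  alert(message: string): void {
    this.notifier.send(message);
  }
}

new AlertService(new EmailNotifier()).alert("Disk almost full");
new AlertService(new SmsNotifier()).alert("Disk almost full");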
\\nTo demonstrate DIP in practice, we will cover implementations in various languages:
\\nUse abstract base classes (ABC) to define abstractions:
\\nfrom abc import ABC, abstractmethod\\n\\nclass Database(ABC):\\n @abstractmethod\\n def query(self, sql: str):\\n pass\\n\\n
Implement low-level modules:
\\nclass MySQLDatabase(Database):\\n def query(self, sql: str):\\n print(f\\"Executing MySQL Query: {sql}\\")\\n\\nclass MongoDBDatabase(Database):\\n def query(self, sql: str):\\n print(f\\"Executing MongoDB Query: {sql}\\")\\n\\n
Create a high-level module:
\\nclass UserService:\\n def __init__(self, db: Database):\\n self.db = db \\n\\n def get_user(self, id: int):\\n self.db.query(f\\"SELECT * FROM users WHERE id = {id}\\")\\n\\n
Define an interface:
\\ninterface Database {\\n void query(String sql);\\n}\\n\\n
Implement low-level modules:
\\nclass MySQLDatabase implements Database {\\n public void query(String sql) {\\n System.out.println(\\"Executing MySQL Query: \\" + sql);\\n }\\n}\\n\\n
Create a high-level module:
\\nclass UserService {\\n private Database db;\\n public UserService(Database db) {\\n this.db = db;\\n }\\n public void getUser(int id) {\\n db.query(\\"SELECT * FROM users WHERE id = \\" + id);\\n }\\n}\\n\\n
Define an interface:
\\ninterface Database {\\n query(sql: string): void;\\n}\\n\\n
Implement low-level modules:
\\nclass MySQLDatabase implements Database {\\n query(sql: string): void {\\n console.log(`Executing MySQL Query: ${sql}`);\\n }\\n}\\n\\n
Create a high-level module:
\\nclass UserService {\\n private db: Database;\\n constructor(db: Database) {\\n this.db = db;\\n }\\n getUser(id: number): void {\\n this.db.query(`SELECT * FROM users WHERE id = ${id}`);\\n }\\n}\\n\\n
Spring’s Inversion of Control (IoC) container helps achieve DIP by injecting dependencies at runtime:
\\nimport org.springframework.stereotype.Service;\\nimport org.springframework.beans.factory.annotation.Autowired;\\nimport org.springframework.context.annotation.ComponentScan;\\nimport org.springframework.context.annotation.Configuration;\\nimport org.springframework.context.annotation.AnnotationConfigApplicationContext;\\n\\ninterface Logger {\\n void log(String message);\\n}\\n\\n@Service\\nclass ConsoleLogger implements Logger {\\n public void log(String message) {\\n System.out.println(message);\\n }\\n}\\n\\n@Service\\nclass Application {\\n private final Logger logger;\\n\\n @Autowired\\n public Application(Logger logger) {\\n this.logger = logger;\\n }\\n\\n public void run() {\\n logger.log(\\"Application started\\");\\n }\\n}\\n\\n@Configuration\\n@ComponentScan(\\"com.example\\")\\nclass AppConfig {}\\n\\npublic class Main {\\n public static void main(String[] args) {\\n var context = new AnnotationConfigApplicationContext(AppConfig.class);\\n Application app = context.getBean(Application.class);\\n app.run();\\n }\\n}\\n\\n
In .NET Core, the built-in dependency injection (DI) container makes it easy to implement DIP:
\\npublic interface ILoggerService {\\n void Log(string message);\\n}\\n\\npublic class ConsoleLogger : ILoggerService {\\n public void Log(string message) {\\n Console.WriteLine(message);\\n }\\n}\\n\\npublic class Application {\\n private readonly ILoggerService _logger;\\n\\n public Application(ILoggerService logger) {\\n _logger = logger;\\n }\\n\\n public void Run() {\\n _logger.Log(\\"Application started\\");\\n }\\n}\\n\\n
.NET Core’s built-in DI container ensures that the Application class depends only on the abstraction ILoggerService, making the code modular and testable:
\\nvar builder = WebApplication.CreateBuilder(args);\\nbuilder.Services.AddSingleton<ILoggerService, ConsoleLogger>();\\nbuilder.Services.AddSingleton<Application>();\\nvar app = builder.Build();\\nvar application = app.Services.GetRequiredService<Application>();\\napplication.Run();\\n\\n
The dependency inversion principle has many use cases in different areas of software development. This section will explore several practical use cases and break down the benefits of DIP in each case:
\\nWhile DIP offers significant advantages, misusing it can lead to issues. Here are some common pitfalls and solutions:
\\nDevelopers often struggle to decide when to use DIP and when to simply use direct dependencies. Not every class requires an interface.
\\nAs essential as the dependency inversion principle is in software development, misusing it can create unnecessary complexity. DIP is not always necessary when the variation of services is minimal or implementations do not frequently change, as in a simple payment processing system.
\\nUse DIP when:
\\nUse direct dependencies when:
\\nThese tactics will help you take full advantage of DIP:
\\nThe dependency inversion principle is a powerful concept in software design that enhances flexibility, scalability, and maintainability. By decoupling business logic from implementation details through abstractions, DIP enables testable, reusable, and understandable code. However, like any design principle, its misuse can introduce unnecessary complexity.
\\nIn this guide, we have explored the essentials of the dependency inversion principle, its real-world applications, best practices, and practical implementation across different programming languages. With this knowledge, you are now well-equipped to leverage DIP effectively in your software development projects.
Higher-order components (HOCs) are powerful patterns in React that allow developers to enhance components by wrapping them with additional functionality. They provide a reusable way to manage cross-cutting concerns, such as authentication, logging, or global state management, without modifying the original component directly.
\\nWhile Hooks have largely replaced HOCs for logic reuse, HOCs still offer unique advantages in certain scenarios, particularly when working with legacy codebases or performing complex component transformations.
\\nUpdate history:
\\nWhen structuring a React application, developers often need to reuse logic across multiple components. Hooks have become the go-to solution for state management and logic encapsulation since their introduction in React 16.8. However, HOCs remain useful in specific scenarios, particularly for complex component transformations and cross-cutting concerns.
\\nA higher-order component is a function that takes a component as an argument and returns a new, enhanced component.
\\nBoth HOCs and Hooks encapsulate stateful logic, but they do so differently and are suited for different use cases.
\\nTo illustrate the difference, let’s compare two implementations of a simple counter feature—one using a HOC and another using a custom Hook.
\\n// HOC that adds counter functionality to a component\\nconst withCounter = (WrappedComponent) => {\\n return function CounterWrapper(props) {\\n const [count, setCount] = useState(0);\\n return (\\n <WrappedComponent \\n count={count}\\n increment={() => setCount(prev => prev + 1)}\\n {...props}\\n />\\n );\\n };\\n};\\n\\n
// Custom Hook that provides counter functionality
const useCounter = () => {
  const [count, setCount] = useState(0);
  return {
    count,
    increment: () => setCount(prev => prev + 1)
  };
};

// Usage
const Counter = () => {
  const { count, increment } = useCounter();
  return (
    <>
      <button onClick={increment}>Increment</button>
      <p>Clicked: {count}</p>
    </>
  );
};
Notice that while both approaches achieve similar functionality, the HOC pattern wraps an existing component to enhance it, whereas a custom Hook extracts reusable logic without altering the component hierarchy.
\\nOverall, while both approaches manage state similarly, the HOC is ideal for wrapping and enhancing an existing component without directly modifying it, whereas a custom Hook offers a cleaner solution for sharing stateful logic across multiple components without adding an extra layer.
\\nAccording to React’s documentation, a typical React HOC has the following definition:
\\n“A higher-order component is a function that takes in a component and returns a new component.”
\\nUsing code, we can rewrite the above statement like so:
\\nconst newComponent = higherFunction(WrappedComponent);\\n\\n
In this line:
\\nnewComponent
— The enhanced componenthigherFunction
— A function that enhances WrappedComponent
WrappedComponent
— The base component whose functionality we want to extendFirst, create a function that takes the base component as an argument and returns a new component with added functionality. In a functional HOC, you can use Hooks for state and side effects:
\\nimport React, { useState, useEffect } from \'react\';\\n\\nconst withEnhancement = (BaseComponent) => {\\n return function EnhancedComponent(props) {\\n // HOC-specific logic using hooks\\n return <BaseComponent {...props} />;\\n };\\n};\\n\\n
Inside the EnhancedComponent
function, you can use Hooks to manage state and perform side effects. Hooks like useState
, useEffect
, and useRef
can be used to implement additional behavior:
const withEnhancement = (BaseComponent) => {\\n return function EnhancedComponent(props) {\\n const [count, setCount] = useState(0);\\n\\n useEffect(() => {\\n // Perform side effects here\\n }, [count]);\\n\\n return <BaseComponent count={count} setCount={setCount} {...props} />;\\n };\\n};\\n\\n
To use your functional HOC, wrap a component by passing it as an argument to your HOC function. The result will be a new component with the enhanced functionality:
\\nconst EnhancedComponent = withEnhancement(BaseComponent);\\n\\n
You can use EnhancedComponent
in your application just like any other React component, with the added functionality from the HOC:
function App() {\\n return <EnhancedComponent />;\\n}\\n\\n
In the next segment of the article, we will see React’s HOC concept in action.
\\nLet’s dive into a practical use case for HOCs.
\\nWe first need to create a blank React project. To do so, execute the following commands:
\\nnpx create-react-app hoc-tutorial \\ncd hoc-tutorial #navigate to the project folder.\\ncd src #go to codebase\\nmkdir components #will hold all our custom components\\n\\n
For this article, we will build two custom components to demonstrate HOC usage:
\\nClickIncrease.js
— This component will render a button and a piece of text. When the user clicks the button (an onClick
event), the fontSize
property of the text will increaseHoverIncrease.js
— Similar to ClickIncrease
, but it will listen for onMouseOver
events insteadIn your project, navigate to the components
folder and create these two files. Once done, your file structure should look like this:
Now that we have laid out the groundwork for the project, let’s build our custom components.
\\n\\nIn ClickIncrease.js
, write the following code:
// File: components/ClickIncrease.js
import React, { useState } from 'react';

function ClickIncrease() {
  const [fontSize, setFontSize] = useState(10); // Set initial value to 10.

  return (
    <>
      <button onClick={() => setFontSize(size => size + 1)}>
        Increase with click
      </button>
      <p style={{ fontSize: `${fontSize}px` }}>
        Size of font: {fontSize}px
      </p>
    </>
  );
}

export default ClickIncrease;
Next, in HoverIncrease.js
, use the following code:
// File: components/HoverIncrease.js\\nimport React, { useState } from \'react\';\\n\\nfunction HoverIncrease() {\\n const [fontSize, setFontSize] = useState(10);\\n\\n return (\\n <div onMouseOver={() => setFontSize(size => size + 1)}>\\n <p style={{ fontSize: `${fontSize}px` }}>\\n Size of font: {fontSize}px\\n </p>\\n </div>\\n );\\n}\\n\\nexport default HoverIncrease;\\n\\n
Finally, render these components in the main App.js
file:
// File: App.js\\nimport React from \'react\';\\nimport ClickIncrease from \'./components/ClickIncrease\';\\nimport HoverIncrease from \'./components/HoverIncrease\';\\n\\nfunction App() {\\n return (\\n <div>\\n <ClickIncrease />\\n <HoverIncrease />\\n </div>\\n );\\n}\\n\\nexport default App;\\n\\n
Let’s test it out! This is the expected result:
\\nWithin the components
folder, create a file called withCounter.js
. Here, start by writing the following code:
import React from \\"react\\";\\nconst UpdatedComponent = (OriginalComponent) => {\\nfunction NewComponent(props) {\\n//render OriginalComponent and pass on its props.\\nreturn ;\\n}\\nreturn NewComponent;\\n};\\nexport default UpdatedComponent;\\n\\n
Let’s deconstruct this code piece by piece. To start, we created a function called UpdatedComponent
that takes in an argument called OriginalComponent
. In this case, the OriginalComponent
will be the React element, which will be wrapped.
Then, we told React to render OriginalComponent
to the UI. We will implement enhancement functionality later in this article.
When that’s done, it’s time to use the UpdatedComponent
function in our app. To do so, first go to the HoverIncrease.js
file and write the following lines:
import withCounter from \\"./withCounter.js\\" //import the withCounter function\\n//..further code ..\\nfunction HoverIncrease() {\\n//..further code\\n}\\n//replace your \'export\' statement with:\\nexport default withCounter(HoverIncrease);\\n//We have now converted HoverIncrease to an HOC function.\\n\\n
Next, do the same process with the ClickIncrease
module:
//file name: components/ClickIncrease.js\\nimport withCounter from \\"./withCounter\\";\\nfunction ClickIncrease() {\\n//...further code\\n}\\nexport default withCounter(ClickIncrease);\\n//ClickIncrease is now a wrapped component of the withCounter method.\\n\\n
This will be the result:
\\nNotice that our result is unchanged. This is because we haven’t made changes to our HOC yet. In the next section, you will learn how to share props between our components.
\\nUsing higher-order components, React allows developers to share props among wrapped components.
\\nFirst, add a name
prop in withCounter.js
as follows:
// File: components/withCounter.js\\nconst UpdatedComponent = (OriginalComponent) => {\\n function NewComponent(props) {\\n return <OriginalComponent name=\\"LogRocket\\" {...props} />;\\n }\\n return NewComponent;\\n};\\nexport default UpdatedComponent;\\n\\n
Next, modify the child components to read this prop:
\\n// File: components/HoverIncrease.js\\nfunction HoverIncrease(props) {\\n return (\\n <div>\\n Value of \'name\' in HoverIncrease: {props.name}\\n </div>\\n );\\n}\\nexport default withCounter(HoverIncrease);\\n// File: components/ClickIncrease.js\\nfunction ClickIncrease(props) {\\n return (\\n <div>\\n Value of \'name\' in ClickIncrease: {props.name}\\n </div>\\n );\\n}\\nexport default withCounter(ClickIncrease);\\n\\n
As shown above, HOCs allow developers to efficiently share props across multiple components.
\\n\\nJust like with props, we can share state variables using Hooks within HOCs. This enables us to encapsulate and reuse logic across multiple components.
\\nIn components/withCounter.js
, define an HOC that manages a counter
state and an incrementCounter
function:
// File: components/withCounter.js\\nimport React, { useState } from \'react\';\\n\\nconst withCounter = (OriginalComponent) => {\\n function NewComponent(props) {\\n const [counter, setCounter] = useState(10) // Initialize counter state\\n\\n return (\\n <OriginalComponent\\n counter={counter}\\n incrementCounter={() => setCounter(counter + 1)}\\n {...props}\\n />\\n )\\n }\\n return NewComponent\\n};\\n\\nexport default withCounter;\\n\\n
counter
state is initialized with a value of 10
incrementCounter
function updates the counter valuecounter
and incrementCounter
as props to the wrapped componentModify the HoverIncrease
and ClickIncrease
components to use the shared state and function:
// File: components/HoverIncrease.js
import withCounter from './withCounter'

function HoverIncrease(props) {
  return (
    <div onMouseOver={props.incrementCounter}>
      <p>Value of 'counter' in HoverIncrease: {props.counter}</p>
    </div>
  )
}

export default withCounter(HoverIncrease)

// File: components/ClickIncrease.js
import withCounter from './withCounter'

function ClickIncrease(props) {
  return (
    <>
      <button onClick={props.incrementCounter}>
        Increment counter
      </button>
      <p>Value of 'counter' in ClickIncrease: {props.counter}</p>
    </>
  )
}

export default withCounter(ClickIncrease)
Here is the expected result:
\\nWhile HOCs are useful for sharing logic across multiple components, they do not share state between different instances of wrapped components. If a shared state is required across multiple components, consider using React’s Context API, which provides an efficient way to manage global state.
\\nEven though our code works, consider the following situation: what if we want to increment the value of counter
with a custom value? Via HOCs, we can even tell React to pass specific data to certain child components. This is made possible with parameters.
Modify components/withCounter.js
to accept an increaseCount
parameter:
// This function will now accept an 'increaseCount' parameter.
const UpdatedComponent = (OriginalComponent, increaseCount) => {
  function NewComponent(props) {
    const [counter, setCounter] = useState(10);
    // This time, increment the counter state by 'increaseCount':
    return (
      <OriginalComponent
        counter={counter}
        incrementCounter={() => setCounter((size) => size + increaseCount)}
        {...props}
      />
    );
  }
  return NewComponent;
};
In this piece of code, we informed React that our function will now take in an additional parameter called increaseCount
.
Modify the HoverIncrease
and ClickIncrease
components to use this parameter:
//In HoverIncrease, change the \'export\' statement:\\nexport default withCounter(HoverIncrease, 10); //value of increaseCount is 10.\\n//this will increment the \'counter\' Hook by 10.\\n//In ClickIncrease:\\nexport default withCounter(ClickIncrease, 3); //value of increaseCount is 3.\\n//will increment the \'counter\' state by 3 steps.\\n\\n
By passing a custom value (increaseCount
) to the HOC, we can dynamically control the increment behavior in each wrapped component.
Here is the expected result:
\\nIn the end, the withCounter.js
file should look like this:
import React from \\"react\\";\\nimport { useState } from \\"react\\";\\nconst UpdatedComponent = (OriginalComponent, increaseCount) => {\\nfunction NewComponent(props) {\\nconst [counter, setCounter] = useState(10);\\nreturn (\\nname=\\"LogRocket\\"\\ncounter={counter}\\nincrementCounter={() => setCounter((size) => size + increaseCount)}\\n/>\\n);\\n}\\nreturn NewComponent;\\n};\\nexport default UpdatedComponent;\\n\\n
HoverIncrease.js
should look like this:
import { useState } from \\"react\\";\\nimport withCounter from \\"./withCounter\\";\\nfunction HoverIncrease(props) {\\nconst [fontSize, setFontSize] = useState(10);\\nconst { counter, incrementCounter } = props;\\nreturn (\\nsetFontSize((size) => size + 1)}>\\nIncrease on hover\\nSize of font in onMouseOver function: {fontSize}\\nValue of \'name\' in HoverIncrease: {props.name}\\nincrementCounter()}>Increment counter\\nValue of \'counter\' in HoverIncrease: {counter}\\n);\\n}\\nexport default withCounter(HoverIncrease, 10);\\n\\n
And finally, your ClickIncrease
component should have the following code:
import { useEffect, useState } from \\"react\\";\\nimport withCounter from \\"./withCounter\\";\\nfunction ClickIncrease(props) {\\nconst { counter, incrementCounter } = props;\\nconst [fontSize, setFontSize] = useState(10);\\nreturn (\\nsetFontSize((size) => size + 1)}>\\nIncrease with click\\nSize of font in onClick function: {fontSize}\\nValue of \'name\' in ClickIncrease: {props.name}\\nincrementCounter()}>Increment counter\\nValue of \'counter\' in ClickIncrease: {counter}\\n);\\n}\\nexport default withCounter(ClickIncrease, 3);\\n\\n
Choosing between higher-order components (HOCs) and Hooks depends on two key factors: component transformation and code organization.
\\nHOCs and Hooks can complement each other to create robust solutions. Below is a real-world authentication example:
\\n// Authentication HOC\\nconst withAuth = (WrappedComponent, requiredRole) => {\\n return function AuthWrapper(props) {\\n const { isAuthenticated, userRole } = useAuth(); // Custom hook for auth state\\n const navigate = useNavigate();\\n\\n useEffect(() => {\\n if (!isAuthenticated) {\\n navigate(\'/login\');\\n } else if (requiredRole && userRole !== requiredRole) {\\n navigate(\'/unauthorized\');\\n }\\n }, [isAuthenticated, userRole, navigate]);\\n\\n if (!isAuthenticated) {\\n return null; // Optionally return a loader while determining authentication\\n }\\n\\n return <WrappedComponent {...props} />;\\n };\\n};\\n\\n// Usage with a protected component\\nconst AdminDashboard = ({ data }) => {\\n return <div>Admin Dashboard Content</div>;\\n};\\n\\nexport default withAuth(AdminDashboard, \'admin\');\\n\\n
Here’s another example demonstrating performance optimization using Hooks within an HOC:
\\n// Performance optimization HOC using hooks\\nconst withDataFetching = (WrappedComponent, fetchConfig) => {\\n return function DataFetchingWrapper(props) {\\n const [data, setData] = useState(null);\\n const [error, setError] = useState(null);\\n const [loading, setLoading] = useState(true);\\n\\n const { cache } = useCacheContext();\\n const { notify } = useNotification();\\n\\n useEffect(() => {\\n const fetchData = async () => {\\n try {\\n const cachedData = cache.get(fetchConfig.key);\\n if (cachedData) {\\n setData(cachedData);\\n setLoading(false);\\n return;\\n }\\n\\n const response = await fetch(fetchConfig.url);\\n const result = await response.json();\\n\\n cache.set(fetchConfig.key, result);\\n setData(result);\\n } catch (err) {\\n setError(err);\\n notify({\\n type: \'error\',\\n message: \'Failed to fetch data\',\\n });\\n } finally {\\n setLoading(false);\\n }\\n };\\n\\n fetchData();\\n }, [fetchConfig.url, fetchConfig.key]);\\n\\n return <WrappedComponent {...props} data={data} loading={loading} error={error} />;\\n };\\n};\\n\\n
For a broader perspective on advanced React logic reuse, see “The modern guide to React state patterns.”
\\nIf your HOC involves expensive computations, consider performance optimization techniques like memoization to prevent unnecessary re-renders. Below is an example using useMemo
and React.memo
:
// Assume expensiveDataProcessing is an expensive function that processes props.data\\n\\nconst expensiveDataProcessing = (data) => { \\n // ...expensive computations... \\n return data; // Replace with the actual processed result \\n};\\n\\nconst withOptimizedData = (WrappedComponent) => { \\n function OptimizedDataWrapper(props) { \\n const memoizedProps = useMemo(() => ({ \\n ...props, \\n processedData: expensiveDataProcessing(props.data), \\n }), [props.data]); \\n return <WrappedComponent {...memoizedProps} />; \\n }\\n return React.memo(OptimizedDataWrapper); \\n}; \\nexport default withOptimizedData;\\n\\n
When enhancing a base component with several cross-cutting concerns (such as authentication, data fetching, error handling, and analytics), you can compose multiple HOCs into one.
\\nTo compose multiple HOCs directly:
\\nconst composedComponent = withAuth(withData(withLogging(BaseComponent)));\\n\\n
Alternatively, use a compose
utility to combine multiple functions from right to left:
// Utility\\nconst compose = (...functions) => x =>\\n functions.reduceRight((acc, fn) => fn(acc), x);\\n\\n// Usage\\nconst composedComponent = compose(withAuth, withData, withLogging)(BaseComponent);\\n\\n
// These will behave differently:\\nconst enhance1 = compose(withAuth, withDataFetching);\\nconst enhance2 = compose(withDataFetching, withAuth);\\n\\n
// Props flow through each HOC in the chain\\nconst withProps = compose(\\n withAuth, // Adds isAuthenticated\\n withDataFetching // Adds data, loading\\n);\\n// Final component receives: { isAuthenticated, data, loading, ...originalProps }\\n\\n
Avoid excessive composition:
\\nconst tooManyHOCs = compose(\\n withAuth,\\n withData,\\n withLogging,\\n withTheme,\\n withTranslation,\\n withRouter,\\n withRedux\\n);\\n// Each layer adds complexity and potential performance impact\\n\\n
A better approach is to combine related concerns:
\\nconst withDataFeatures = compose(\\n withData,\\n withLoading,\\n withError\\n);\\n\\nconst withAppFeatures = compose(\\n withAuth,\\n withAnalytics\\n);\\n\\n
const withDebug = (WrappedComponent) => {\\n return function DebugWrapper(props) {\\n console.log(\'Component:\', WrappedComponent.name);\\n console.log(\'Props:\', props);\\n return <WrappedComponent {...props} />;\\n };\\n};\\n\\nconst enhance = compose(\\n withDebug, // Add at different positions to debug specific layers\\n withAuth,\\n withDebug,\\n withDataFetching\\n);\\n\\n
const withDataProtection = compose(\\n withAuth,\\n withErrorBoundary,\\n withLoading\\n);\\n\\nconst withAnalytics = compose(\\n withTracking,\\n withMetrics,\\n withLogging\\n);\\n\\n// Use them together or separately\\nconst EnhancedComponent = compose(\\n withDataProtection,\\n withAnalytics\\n)(BaseComponent);\\n\\n
Ensuring type safety for HOCs improves maintainability. Below is an example of a type-safe HOC in TypeScript:
\\nimport React, { useState, useEffect } from \'react\';\\n\\ninterface WithDataProps<T> {\\n data: T | null;\\n loading: boolean;\\n error: Error | null;\\n}\\n\\ninterface FetchConfig {\\n url: string;\\n}\\n\\nfunction withData<T, P extends object>(\\n WrappedComponent: React.ComponentType<P & WithDataProps<T>>,\\n fetchConfig: FetchConfig\\n): React.FC<P> {\\n return function WithDataComponent(props: P) {\\n const [data, setData] = useState<T | null>(null);\\n const [loading, setLoading] = useState<boolean>(true);\\n const [error, setError] = useState<Error | null>(null);\\n\\n useEffect(() => {\\n fetch(fetchConfig.url)\\n .then((response) => response.json())\\n .then((result: T) => {\\n setData(result);\\n setLoading(false);\\n })\\n .catch((err: Error) => {\\n setError(err);\\n setLoading(false);\\n });\\n }, [fetchConfig.url]);\\n\\n return (\\n <WrappedComponent {...props} data={data} loading={loading} error={error} />\\n );\\n };\\n}\\n\\nexport default withData;\\n\\n
One important thing to note is that the process of passing down props to an HOC’s child component is different than that of a non-HOC component.
\\nFor example, look at the following code:
\\nfunction App() {\\nreturn (\\n{/*Pass in a \'secretWord\' prop*/}\\n\\n);\\n}\\nfunction HoverIncrease(props) {\\n//read prop value:\\nconsole.log(\\"Value of secretWord: \\" + props.secretWord);\\n//further code..\\n}\\n\\n
In theory, we should get the message Value of secretWord: pineapple
in the console. However, that’s not the case here:
In this case, the secretWord
prop is actually being passed to the withCounter
function and not to the HoverIncrease
component.
To solve this issue, we have to make a simple change to withCounter.js
:
const UpdatedComponent = (OriginalComponent, increaseCount) => {
  function NewComponent(props) {
    const [counter, setCounter] = useState(10);
    // Pass down all incoming props to the HOC's child:
    return (
      <OriginalComponent
        name="LogRocket"
        counter={counter}
        incrementCounter={() => setCounter((size) => size + increaseCount)}
        {...props}
      />
    );
  }
  return NewComponent;
};
This minor fix solves our problem:
\\nThis article covered the fundamentals of React’s higher-order components, including best practices, performance optimizations, debugging strategies, and type safety. Experimenting with the provided code samples will help solidify your understanding. Happy coding!
tsup is a fast, efficient, zero-configuration TypeScript bundler designed to streamline the process of compiling, optimizing, and outputting different module formats. Unlike older bundlers, tsup leverages esbuild under the hood for high-speed performance, supports modern ECMAScript modules (ESM) and CommonJS (CJS), and provides built-in features like tree shaking, minification, and code splitting.
\\nThis guide walks through setting up tsup, configuring the output, and using the outExtension
option to customize file extensions.
tsup is a modern, fast, and zero-configuration bundler for TypeScript and JavaScript projects. It simplifies the process of bundling libraries or applications written in TypeScript or JavaScript, making it easier to produce optimized and production-ready code. tsup uses esbuild under the hood for rapid build times.
\\ntsup is primarily used to bundle TypeScript and JavaScript projects into distributable formats. It automatically handles TypeScript compilation, tree shaking, and bundling without requiring complex configuration. It supports multiple output formats like ESM, CJS, and IIFE, making it versatile for various environments.
\\nIt’s ideal for building libraries, applications, or any project that needs to be packaged for deployment. tsup optimizes code by removing unused sections (tree shaking) and minifying output for production. It has native TypeScript support, allowing you to bundle your code directly without precompilation. Additionally, tsup can generate development builds with source maps for debugging and production-ready builds with minification.
\\ntsup minimizes setup complexity by offering a zero-configuration approach, allowing developers to bundle TypeScript and JavaScript projects without extensive configuration files. Unlike traditional bundlers that require complex setups with multiple plugins and custom build scripts, tsup works out of the box by automatically detecting entry points, handling TypeScript compilation, and optimizing output formats. This streamlined workflow significantly reduces the time spent configuring a build system, making it easier to focus on development rather than setup.
\\nIf you’re considering alternative bundlers, check out our article “Using Rollup to package a library for TypeScript and JavaScript” for a detailed comparison.
\\nBefore bundling with tsup, start by creating a new TypeScript package. Initialize a project directory and set up TypeScript:
\\nmkdir my-ts-package && cd my-ts-package\\nnpm init -y\\nnpm install typescript --save-dev\\nnpx tsc --init\\n\\n
This initializes a TypeScript package with a default tsconfig.json
.
To integrate tsup
into a TypeScript project, install it via npm:
npm install tsup --save-dev\\n\\n
Then, update the package.json
file to add a build script:
{\\n \\"scripts\\": {\\n \\"build\\": \\"tsup\\"\\n }\\n}\\n\\n
By default, tsup looks for an index.ts
or src/index.ts
entry point. To specify an entry file manually, pass it as an argument. For example, if you have a main.ts
file inside src/
, you can define a simple function:
export function greet() {\\n return \\"Hello from tsup!\\";\\n}\\n\\n
Run the following tsup
command:
npx tsup src/main.ts --format esm,cjs --dts\\n\\n
This command instructs tsup to generate both ESM and CJS outputs and includes TypeScript declaration files (.d.ts
). These files are essential for TypeScript libraries because they provide type definitions that enable editors and compilers to understand the package’s API without needing access to the original TypeScript source. Manually generating these files with tsc
can be cumbersome, requiring additional configurations, but tsup simplifies this by handling it automatically with the --dts
flag.
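Assuming the src/main.ts entry above and default settings, the output folder will look roughly like this (the exact extensions depend on the "type" field in package.json):

dist/
├── main.js     # CJS build
├── main.mjs    # ESM build
└── main.d.ts   # Type declarations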
For those exploring other modern bundling options, our post “Migrating a TypeScript app from Node.js to Bun” offers valuable insights on an emerging alternative.
\\noutExtension
By default, tsup outputs .js
files for both ESM and CJS formats. However, certain environments and packaging requirements may require different extensions. The outExtension
option allows renaming output files.
In a tsup.config.ts
file, define:
import { defineConfig } from \'tsup\';\\n\\nexport default defineConfig({\\n entry: [\'src/index.ts\'],\\n format: [\'esm\', \'cjs\'],\\n dts: true,\\n outExtension({ format }) {\\n return format === \'esm\' ? { js: \'.mjs\' } : { js: \'.cjs\' };\\n },\\n});\\n\\n
This configuration ensures that ESM outputs use .mjs
, while CJS outputs use .cjs
, making module resolution more explicit in Node.js environments.
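These explicit extensions pair naturally with an exports map in package.json. A sketch of how the entries might look for the files generated above (paths are illustrative):

{
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    }
  }
}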
Supporting both ESM and CJS in a TypeScript package is crucial for module format compatibility across different environments. ESM (ECMAScript Modules) is the modern standard, optimized for tree shaking and better performance in bundlers like Webpack and Vite. CJS, on the other hand, is still widely used in Node.js projects and older toolchains. By generating both formats, the package is flexible, allowing users to consume it regardless of their module system.
\\ntsup simplifies this dual support by allowing both formats to be defined in a single command:
\\nnpx tsup src/index.ts --format esm,cjs --dts\\n\\n
This approach ensures that both modern and legacy projects can import the package without issues.
\\nFor more on the differences between ESM and CommonJS — and why these distinctions matter — see our guide on CommonJS vs. ES modules in Node.js.
\\ntsup
plays a crucial role in efficiently bundling TypeScript code. The configuration in Mappersmith’s tsup.config.ts
provides an excellent example of setting up bundling for different environments, target versions, and output formats. It showcases how to define entry points, handle different build scenarios like Node.js and browser environments, and manage sourcemaps, type declarations, and minification.
The package.json
script in Mappersmith integrates tsup
as part of a larger build process. It begins by copying version files, running tsup
to bundle the code, and finally generating type declarations. This modular approach keeps the workflow clean and focused on different aspects of the build process. The build script ties together multiple tasks, demonstrating how tsup
fits into a broader toolchain.
tsup.config.ts
fileFor Mappersmith’s tsup
configuration, the following setup is used:
import { defineConfig, Options } from 'tsup'
import { esbuildPluginFilePathExtensions } from 'esbuild-plugin-file-path-extensions'

// Inspired by https://github.com/immerjs/immer/pull/1032/files
export default defineConfig((options) => {
  const commonOptions: Partial<Options> = {
    entry: ['src/**/*.[jt]s', '!./src/**/*.d.ts', '!./src/**/*.spec.[jt]s'],
    platform: 'node',
    target: 'node16',
    // `splitting` should be false, it ensures we are not getting any `chunk-*` files in the output.
    splitting: false,
    // `bundle` should be false, it ensures we are not getting the entire bundle in EVERY file of the output.
    bundle: false,
    // `sourcemap` should be true, we want to be able to point users back to the original source.
    sourcemap: true,
    clean: true,
    ...options,
  }
  const productionOptions = {
    minify: true,
    define: {
      'process.env.NODE_ENV': JSON.stringify('production'),
    },
  }

  return [
    // ESM, standard bundler dev, embedded `process` references.
    // (this is consumed by ["exports" > "." > "import"] and ["exports" > "." > "types"] in package.json)
    {
      ...commonOptions,
      format: ['esm'],
      clean: true,
      outDir: './dist/esm/',
      esbuildPlugins: [esbuildPluginFilePathExtensions({ filter: /^\./ })],
      // Yes, bundle: true => https://github.com/favware/esbuild-plugin-file-path-extensions?tab=readme-ov-file#usage
      bundle: true,
      dts: {
        compilerOptions: {
          resolveJsonModule: false,
          outDir: './dist',
        },
      },
    },
    // ESM for use in browsers. Minified, with `process` compiled away
    {
      ...commonOptions,
      ...productionOptions,
      // `splitting` should be true (revert to the default)
      splitting: true,
      // `bundle` should be true, so we get everything in one file.
      bundle: true,
      entry: {
        'mappersmith.production.min': 'src/index.ts',
      },
      platform: 'browser',
      format: ['esm'],
      outDir: './dist/browser/',
    },
    // CJS
    {
      ...commonOptions,
      clean: true,
      format: ['cjs'],
      outDir: './dist/',
    },
  ]
})
In the above setup:
\\ncommonOptions
contains settings that apply to all builds, such as defining entry files, targeting Node.js version 16, disabling code splitting, and enabling sourcemaps.productionOptions
applies specifically to production builds, enabling minification and defining the NODE_ENV
variable as production
.esbuildPluginFilePathExtensions
plugin and handling TypeScript declarations.outExtension
When generating production builds, it is often useful to append .min.js
to minified files for better clarity and organization. The outExtension
option in tsup allows you to modify output file extensions dynamically. Update your configuration as follows:
import { defineConfig } from \'tsup\';\\n\\nexport default defineConfig((options) => ({\\n entry: [\'src/index.ts\'],\\n format: [\'esm\', \'cjs\'],\\n dts: true,\\n minify: true,\\n outExtension({ format }) {\\n return format === \'esm\' ? { js: \'.min.mjs\' } : { js: \'.min.cjs\' };\\n },\\n}));\\n\\n
This setup ensures:
\\n*.min.mjs
*.min.cjs
This improves clarity when distributing both development and production builds. Explicitly defining file extensions prevents ambiguity in module resolution, particularly in environments requiring strict format handling.
\\ntsup supports multiple entry points, making it ideal for bundling libraries with several exports. To configure multiple entry points, update your tsup.config.ts
as follows:
import { defineConfig } from \'tsup\';\\n\\nexport default defineConfig({\\n entry: {\\n index: \'src/index.ts\',\\n utils: \'src/utils.ts\',\\n },\\n format: [\'esm\', \'cjs\'],\\n dts: true,\\n splitting: true,\\n sourcemap: true,\\n clean: true,\\n});\\n\\n
This configuration compiles src/index.ts
and src/utils.ts
separately, enabling better modularity and maintainability in larger projects.
Beyond entry points, tsup also lets you exclude external dependencies to keep the final bundle lightweight. Use the external
option to specify dependencies that should not be bundled:
import { defineConfig } from \'tsup\';\\n\\nexport default defineConfig({\\n entry: [\'src/index.ts\'],\\n format: [\'esm\', \'cjs\'],\\n dts: true,\\n external: [\'react\', \'lodash\'],\\n});\\n\\n
This ensures that dependencies like react
and lodash
are referenced externally rather than bundled within the output, reducing the final file size and improving efficiency.
By leveraging these configurations, tsup provides a streamlined approach to managing multiple entry points, minified outputs, and external dependencies, making it a powerful tool for modern TypeScript project bundling.
\\n\\nWhen using tsup
to bundle your TypeScript package, following best practices ensures a smooth and efficient workflow while avoiding common pitfalls. Below are key strategies to use tsup
effectively:
To ensure your package works across different environments, always specify both ESM (ECMAScript Modules) and CJS (CommonJS) formats. Modern frameworks and tools often prefer ESM, while older systems or Node.js environments may still rely on CJS. Set the format
option in your tsup
configuration:
{\\n \\"format\\": [\\"esm\\", \\"cjs\\"]\\n}\\n\\n
Failing to support both formats can limit your package’s usability, so this step is essential.
\\n.d.ts
)TypeScript declaration files provide type information for users of your package. Without them, users lose type safety and IntelliSense support. Enable dts
in your tsup
configuration to generate these files:
{\\n \\"dts\\": true\\n}\\n\\n
Skipping this step can hinder the developer experience for TypeScript users.
\\noutExtension
for Node.js compatibilityNode.js has strict rules for resolving module files, requiring .mjs
for ESM and .cjs
for CJS. To avoid runtime errors, define the outExtension
option in your tsup
configuration:
{\\n \\"outExtension\\": ({ format }) => ({\\n \\".js\\": format === \\"cjs\\" ? \\".cjs\\" : \\".mjs\\"\\n })\\n}\\n\\n
This ensures Node.js correctly resolves your module files, preventing import issues.
Hardcoding a single output path in your tsup configuration reduces flexibility and makes the setup less reusable. Instead, adapt the configuration to different formats or environments. One supported approach is to return one configuration object per format, each with its own outDir:

export default defineConfig([
  { entry: ['src/index.ts'], format: ['esm'], outDir: 'dist/esm' },
  { entry: ['src/index.ts'], format: ['cjs'], outDir: 'dist/cjs' },
]);
This approach keeps your configuration flexible and easier to maintain.
\\nMinification reduces bundle size but makes debugging difficult. Enable source maps when minifying to simplify debugging:
\\n{\\n \\"minify\\": true,\\n \\"sourcemap\\": true\\n}\\n\\n
Without source maps, debugging minified code can be nearly impossible.
\\nBy default, tsup treats certain dependencies as external and does not include them in the bundle. If you find missing dependencies in the final output, configure the external
option:
{\\n \\"external\\": [\\"react\\", \\"lodash\\"]\\n}\\n\\n
This ensures dependencies like react
and lodash
are referenced externally rather than bundled, reducing file size.
While tsup
is optimized for speed, enabling certain features like source maps can impact build times. Monitor your build performance and adjust configurations as needed to balance speed and debugging capabilities.
tsup
is a powerful bundler that simplifies the process of bundling TypeScript projects. Its support for ESM and CJS formats, along with features like outExtension
for customized file extensions, makes it an essential tool for modern JavaScript development. By following these best practices, you can effectively integrate tsup
into your workflow, ensuring efficient and production-ready builds. Whether you are focusing on tree shaking, module format compatibility, or streamlined builds, tsup
provides the necessary tools for success.
One feature that makes Telegram stand out from other messaging apps is how easy it is to build bots for it. Telegram bots are lightweight, programmable applications that run within the Telegram app. They use the Telegram interface to accept commands and display results, allowing users to seamlessly interact with them.
\\nTelegram bots don’t only run inside the app; they use the Telegram Bot API to perform tasks like messaging a user, joining groups or channels, and more. Bots can do most things a human user of the app can do — with the help of the API. And because bots are computer programs, they can be written in any programming language, making them highly flexible and adaptable.
\\nBecause they’re programmable, Telegram bots can automate tasks, perform logical operations, and offer custom interaction interfaces that aren’t available to regular users. In Telegram, bots are easily distinguishable from human users.
\\nThis article starts by exploring the many use cases of Telegram bots. It then walks you through a tutorial on building a Telegram bot using TypeScript and Node.js.
\\nHere are some key benefits Telegram users gain from bots:
\\nTelegram bots have a wide range of uses, including serving as an alternative to mobile apps. Since they function similarly to apps, they can even run games. Developers can also use bots to quickly prototype CRUD applications
The rest of the article will focus on how to build a custom Telegram bot in Node.js. To follow along, you need knowledge of Node.js APIs and TypeScript. Make sure to have Node.js v20 or above installed.
\\nThe following tutorial is an implementation of a Telegram bot any Telegram user can chat with about anything. The chatbot will be able to respond to text, photo, and voice messages. Our project will be implemented using the following tools:
- grammY: a framework for building Telegram bots in Node.js, which we'll use to receive and reply to messages
- Google Gemini: the project uses the gemini-1.5-flash AI model to generate the responses sent to users, including responses to voice notes and images
The final source code of the project can be found in this GitHub repository.
\\nThe first step to creating a new Telegram Bot is to message BotFather in the app. Open the Telegram App and give BotFather the /start
command. BotFather will then provide a comprehensive menu of all the services it offers. Follow the menu to create a new bot.
For this example, the bot’s name is Gemini AI Bot
with the username gemini01_bot
. You’ll have to create a unique username for your use case.
Finally, BotFather will generate a bot token for you. This token is a unique authentication token for your new bot. Anyone with access to it can make changes to the bot, so be sure to copy it and store it somewhere safe — you’ll need it soon.
\\nTo get started with grammY in Node.js, first create the project folder. This tutorial will name the project telegram-bot
:
mkdir telegram-bot\\n\\n
Then, navigate into the project folder and initialize npm in the command line:
\\ncd telegram-bot\\nnpm init -y\\n\\n
Next, install grammY, TypeScript, and Node.js type definitions (@types/node) from npm:
\\nnpm install grammy \\n# Install grammY -- the bot library -- as a dependency\\nnpm install --save-dev typescript @types/node\\n# This is necessary for developing TypeScript applications in Node.js\\n\\n
Initialize TypeScript:
\\nnpx tsc --init\\n\\n
Inside the newly created tsconfig.json
file, set the configuration below. This makes sure the project can use ES modules:
{\\n \\"compilerOptions\\": {\\n ...\\n \\"target\\": \\"es2017\\",\\n \\"module\\": \\"nodenext\\",\\n ... \\n }\\n}\\n\\n
Set up the following file structure for the project:
\\n├── bot.ts\\n├── .env\\n├── .gitignore\\n├── node_modules\\n├── package.json\\n├── package-lock.json\\n└── tsconfig.json\\n\\n
Inside the .gitignore
file, exclude the following from git commits:
node_modules/\\n.env\\nbot.js\\n\\n
Finally, open the .env
file and bind the Telegram Bot token to a constant:
TELEGRAM_BOT_TOKEN=xxxxx\\n\\n
With that done, the folder is now set up for the project.
\\nTo get started with Google Gemini, you need to first create an API key. You can do so in the Google AI Studio.
\\nAfter obtaining the API key, open the .env
file and bind the API key to a constant:
...\\nGEMINI_API_KEY=xxxxx\\n\\n
After that, install the Google AI JavaScript SDK:
\\nnpm install @google/generative-ai \\n\\n
With the library installed, open the bot.ts
file and use the constants in the .env
file to configure both Gemini and grammY:
// bot.ts\\n\\nimport { Bot } from \'grammy\';\\nimport { GoogleGenerativeAI, type Part } from \'@google/generative-ai\';\\nimport type { User, File } from \'grammy/types\';\\n\\nconst BOT_API_SERVER = \'https://api.telegram.org\';\\nconst { TELEGRAM_BOT_TOKEN, GEMINI_API_KEY } = process.env;\\nif (!TELEGRAM_BOT_TOKEN || !GEMINI_API_KEY) {\\n throw new Error(\'TELEGRAM_BOT_TOKEN and GEMINI_API_KEY must be provided!\');\\n}\\n\\nconst bot = new Bot(TELEGRAM_BOT_TOKEN);\\nconst genAI = new GoogleGenerativeAI(GEMINI_API_KEY);\\nconst model = genAI.getGenerativeModel({\\n model: \'gemini-1.5-flash\',\\n systemInstruction:\\n \'You are a Telegram Chatbot. Maintain a friendly tone. Keep responses one paragraph short unless told otherwise. You have the ability to respond to audio and pictures.\',\\n});\\nconst chat = model.startChat();\\n\\n
As you can see from the bot.ts
file, it also imported some types from grammy/types
, which will be important later. Also, observe the systemInstruction
given in the Gemini Configuration. This lets you define consistent behavior for the responses to user queries in the chatbot.
To get your first response using the Gemini API with the chatbot, set up a response to the bot with the /start
command. It is the first command any user will give a bot:
// bot.ts\\n...\\n\\nbot.command(\'start\', async (ctx) => {\\n const user: User | undefined = ctx.from;\\n const fullName: string = `${user?.first_name} ${user?.last_name}`;\\n const prompt: string = `Welcome user with the fullname ${fullName} in one sentence.`;\\n const result = await chat.sendMessage(prompt);\\n return ctx.reply(result.response.text(), { parse_mode: \'Markdown\' });\\n});\\n\\nbot.start();\\n\\n
To make type checking and running the bot easier, set up your package.json
with the following scripts:
// package.json\\n{\\n...\\n \\"type\\": \\"module\\",\\n \\"scripts\\": {\\n \\"start\\": \\"node --env-file=.env bot.js\\",\\n \\"watch\\": \\"tsc -w\\",\\n \\"dev\\": \\"node --env-file=.env --watch bot.js\\"\\n },\\n...\\n}\\n\\n
Finally, use the following CLI command to start the application:
\\nnpm run watch & npm run dev\\n\\n
The command type checks the bot.ts
file on any changes, compiles the file, and then runs the resulting bot.js
file. After running the command, test your Telegram bot using the URL https://t.me/<bot username>. Start the bot with the /start command and watch the bot respond. Make sure to keep the bot server running on your local machine:
As stated earlier, this bot will respond to texts using the Google Gemini API. We’ll learn how to implement that in this section.
\\nIn the bot.ts
file, add the following:
// bot.ts\\n...\\n\\nbot.on(\'message:text\', async (ctx) => {\\n const prompt: string = ctx.message.text;\\n const result = await chat.sendMessage(prompt);\\n return ctx.reply(result.response.text(), { parse_mode: \'Markdown\' });\\n});\\n\\nbot.start()\\n\\n
Here, grammY listens for a text message sent to the bot. After that, grammY sends that text message to Gemini and forwards Gemini’s response as a reply to the user. Here is the result:
\\nAfter receiving an audio file, Google Gemini can transcribe it and give a response. This project will prompt Gemini to reply to the transcript of the audio file. A user can send audio using Telegram’s built-in voice message feature:
\\n// bot.ts\\n\\n...\\nbot.on(\'message:voice\', async (ctx) => {\\n const file: File = await ctx.getFile();\\n const filePath: string | undefined = file.file_path;\\n if (!filePath) return;\\n\\n const fileURL: string = `${BOT_API_SERVER}/file/bot${TELEGRAM_BOT_TOKEN}/${filePath}`;\\n const fetchedResponse = await fetch(fileURL);\\n const data: ArrayBuffer = await fetchedResponse.arrayBuffer();\\n const base64Audio: string = Buffer.from(data).toString(\'base64\');\\n\\n const prompt: Array<string | Part> = [\\n {\\n inlineData: {\\n mimeType: \'audio/ogg\',\\n data: base64Audio,\\n },\\n },\\n {\\n text: \'Please respond to the audio prompt.\',\\n },\\n ];\\n const result = await chat.sendMessage(prompt);\\n return ctx.reply(result.response.text(), { parse_mode: \'Markdown\' });\\n});\\n\\nbot.start();\\n\\n
Now the bot should be able to reply to voice messages:
\\nSimilar to audio files, Gemini can also interpret images. Here, the project will use grammY to place a listener for sent images. Then it will prompt Gemini to either use the photo caption as a prompt or describe what is in the photo (if it does not have a caption):
// bot.ts

...
type MIME = 'image/jpeg' | 'image/png';
// Map common photo file extensions to their MIME types
const extToMIME: Record<string, MIME> = {
  jpeg: 'image/jpeg',
  jpg: 'image/jpeg',
  png: 'image/png',
};

bot.on('message:photo', async (ctx) => {
  const caption: string | undefined = ctx.message.caption;
  const photoFile: File = await ctx.getFile();
  const photoFilePath: string | undefined = photoFile.file_path;
  if (!photoFilePath) return;

  const photoURL: string = `${BOT_API_SERVER}/file/bot${TELEGRAM_BOT_TOKEN}/${photoFilePath}`;
  const fetchedResponse = await fetch(photoURL);

  const data: ArrayBuffer = await fetchedResponse.arrayBuffer();
  const base64Photo: string = Buffer.from(data).toString('base64');
  // Extract the file extension from the path, e.g. "photos/file_1.jpg" -> "jpg"
  const match: RegExpMatchArray | null = photoFilePath.match(/[^.]+$/);
  if (!match) return;

  const photoExt: string = match[0];
  const prompt: Array<string | Part> = [
    { inlineData: { mimeType: extToMIME[photoExt], data: base64Photo } },
    { text: caption ?? 'Describe what you see in the photo' },
  ];

  const result = await chat.sendMessage(prompt);
  return ctx.reply(result.response.text(), { parse_mode: 'Markdown' });
});

bot.start();
Now, the Telegram chatbot can respond to images as well:
\\nOne great benefit grammY offers is the ease of handling errors. With the library, a developer can use the bot.catch()
method to catch and handle whatever errors a bot application encounters. Below is a simple error-handling script in grammY:
// bot.ts\\n\\n...\\nbot.catch((error) => {\\n const ctx = error.ctx;\\n console.log(error);\\n return ctx.reply(\'Something went wrong. Try again!\');\\n});\\n\\nbot.start()\\n\\n
The snippet above logs the error to the command line and then replies to the bot user with the message “Something went wrong. Try again!”
For a Telegram bot to keep running at all times, it needs to be deployed to a host that is always up and active. A developer's local machine is unreliable for this, as it can go offline at any time.
\\nThe grammY documentation offers several guides for deploying a bot to different platforms. Fundamentally, a grammY application is a lightweight backend server (in Node.js or Deno), which means you can easily deploy it as you would deploy any other server. After deploying it, anybody can now interact with your Telegram bot at any point in time.
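As one hedged example of what such a deployment can look like, grammY ships a webhookCallback adapter that turns the bot into an HTTP request handler. The sketch below assumes an Express server and that bot.ts exports the Bot instance; when switching to webhooks, you would stop calling bot.start(), which drives long polling:
// server.ts -- a minimal webhook sketch (assumes Express is installed)
import express from 'express';
import { webhookCallback } from 'grammy';
import { bot } from './bot'; // assumes bot.ts exports the Bot instance

const app = express();
app.use(express.json()); // grammY expects parsed JSON request bodies
app.use(webhookCallback(bot, 'express'));

app.listen(process.env.PORT ?? 3000);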
\\nThis article introduced Telegram bots and walked through the process of building one with Node.js. It began by exploring various use cases for bots, then covered how to obtain a bot token, set up a grammY project in Node.js, and get a Gemini API key. The tutorial then demonstrated how to use grammY and Google Gemini to respond to text, audio, and images.
\\nAs we’ve seen, Telegram bots can solve a wide range of problems, offering powerful automation and interaction capabilities. Use this guide as a starting point to experiment and build your own custom Telegram bot. You can find the complete source code for this project here.
The box-shadow
CSS property allows you to add shadows to elements, giving you control over their size, blur, spread, and color. This feature enhances depth and visual emphasis, making it a popular choice for styling buttons, cards, and other UI components to improve both aesthetics and usability.
In this article, we’ll take a deep dive into the box-shadow
property. We’ll start with a detailed breakdown of its syntax before exploring advanced techniques such as layered, neon, and neumorphic shadows. Additionally, we’ll provide a browser compatibility table, practical examples, and an interactive box-shadow
generator to help you apply these styles effectively.
Update history:
- This article was updated to cover modern techniques for the box-shadow property, including the View Transitions API, native CSS nesting, and the CSS @layer rule
The basic syntax is:
box-shadow: <x-offset> <y-offset> <blur-radius> <spread-radius> <shadow-color>;
Property | Description
---|---
x-offset | Sets the horizontal position of the shadow. Positive moves right, negative moves left.
y-offset | Sets the vertical position of the shadow. Positive moves down, negative moves up.
blur-radius | Defines the softness of the shadow. Higher values create more diffuse shadows.
spread-radius | Determines shadow size. Positive values expand, negative values shrink.
color | Specifies the shadow color in HEX, rgba(), or hsla().
.card-1 { \\n box-shadow: 0px 5px 10px 0px rgba(0, 0, 0, 0.5);\\n}\\n\\n.card-2 {\\n box-shadow: 0px -5px 10px 0px rgba(0, 0, 0, 0.5);\\n}\\n\\n
We’ll frequently use rgba()
colors due to their alpha value, which allows for opacity control — an essential factor in creating realistic shadows. In well-lit environments, shadows aren’t purely black; instead, they adopt subtle hues influenced by surrounding light.
When styling with the box-shadow
property, transparent shadows work best, as they blend seamlessly with multicolored backgrounds. Pay attention to how real-world shadows interact with their light sources — this observation will help you craft more natural-looking effects in CSS:
The area closest to the object has the darkest shadows, then it spreads and blurs outward gradually. Opaque or completely black shadows would be distracting, ugly, and imply a complete blockage of light, which isn’t what we’re after.
box-shadow vs. drop-shadow()
The drop-shadow() function applies shadows to images while respecting transparency, whereas box-shadow applies to the entire bounding box of an element. Here's a visual comparison:
Example code:
\\n.box-shadow {\\n box-shadow: 5px 5px 5px 0px rgba(0,0,0,0.3);\\n}\\n\\n.shadow-filter {\\n filter: drop-shadow(5px 5px 5px rgba(0,0,0,0.3));\\n}\\n\\n
When to use box-shadow
Use box-shadow for UI elements like cards and buttons, and drop-shadow() for images or elements with transparency, ensuring the shadow follows their shape.
The following table outlines the key differences between box-shadow
and drop-shadow()
:
Feature | box-shadow | drop-shadow()
---|---|---
Works on | Any block-level element | Images and elements with transparency
Follows transparency? | ❌ No, applies to the entire box | ✅ Yes, follows the element's shape
Customizable blur & spread? | ✅ Yes | ❌ No
Best for | UI elements like buttons and cards | PNGs, SVGs, and transparent images
While drop-shadow()
is ideal for non-rectangular images, box-shadow
provides more flexibility for UI elements. If you need shadows that adapt to transparent areas, drop-shadow()
is the better option.
Getting started with box-shadow
First, create a simple box container with HTML:
\\n<div class=\\"box\\">\\n ...\\n</div>\\n\\n
Next, apply the box-shadow
property in CSS:
.box {\\n height: 150px;\\n width: 150px;\\n background: #fff;\\n border-radius: 20px;\\n box-shadow: 0px 5px 10px 0px rgba(0, 0, 0, 0.5); \\n}\\n\\n
This will render a simple box with a shadow:
Interactive box-shadow generator
Try adjusting the values below using our interactive box-shadow generator. Modify parameters such as x-offset
, y-offset
, blur, and spread to see real-time changes. Once satisfied, copy the generated CSS for immediate use in your projects:
See the Pen CSS Box Shadow Generator by abiolaesther_ (@coded_fae) on CodePen: https://codepen.io/coded_fae/pen/emOwyro
box-shadow use cases
Using box-shadow with the :hover pseudo-class and the transform property
The box-shadow
property can be dynamically modified using the :hover
pseudo-class. You can add a shadow to an element that previously had none or adjust an existing shadow. In this example, the transform
property enhances the illusion of depth:
.box:hover {\\n box-shadow: 0px 10px 20px 5px rgba(0, 0, 0, 0.5);\\n transform: translateY(-5px);\\n}\\n\\n
The transform
property makes it appear as though the box is lifting off the page. Conversely, using the inset
keyword places the shadow inside the element’s frame, giving the effect of it sinking into the page:
.box2 {\\n box-shadow: inset 0px 5px 10px 0px rgba(0, 0, 0, 0.5);\\n}\\n.box2:hover {\\n transform: translateY(5px);\\n box-shadow: inset 0px 10px 20px 2px rgba(0, 0, 0, 0.25);\\n}\\n\\n
You can experiment with these values to achieve the desired effect. Here’s what these shadows look like:
\\n\\nAn alternative to translate()
is scale()
, which increases the size of the element rather than repositioning it. In this example, the scale()
function enlarges the box when hovered:
.box2:hover {\\n transform: scale(1.1);\\n box-shadow: 0px 10px 20px 2px rgba(0, 0, 0, 0.25);\\n}\\n\\n
This effect scales the box to 1.1 times its original size:
Combining the box-shadow property with text-shadow
Like box-shadow, the text-shadow
property allows you to define a shadow’s blur radius, color, and offset. This property lets you create visual effects that enhance text readability and aesthetics. Here’s the basic syntax:
.selector {\\n text-shadow: <horizontal-offset> <vertical-offset> <blur-radius> <color>;\\n}\\n\\n
While text-shadow
applies only to text elements, it can be combined with box-shadow
to add depth and dimension to UI components. Here’s an example:
<div class="site-container">
  <div class="card">...</div>
</div>
In this example, both box-shadow
and text-shadow
enhance the .card
class. The two shadow layers create a neumorphic effect, while the text shadow adds contrast and visual appeal:
.card {\\n padding: 2rem;\\n border-radius: 0.5rem;\\n background: linear-gradient(145deg, #cacaca, #f0f0f0);\\n color: #764abc;\\n text-shadow: \\n -6px 6px 15px rgba(0, 0, 0, 0.5),\\n 6px -6px 15px rgba(255, 255, 255, 0.8);\\n box-shadow: \\n 20px 20px 60px #bebebe, \\n -20px -20px 60px white;\\n}\\n\\n
Here’s the result:
\\n\\nYou can stack multiple shadows by separating them with commas. This technique produces smooth, layered effects:
\\n.stacked-shadows {\\n box-shadow: 0px 1px 2px rgba(0,0,0,0.1), \\n 0px 2px 4px rgba(0,0,0,0.1), \\n 0px 4px 8px rgba(0,0,0,0.1), \\n 0px 8px 16px rgba(0,0,0,0.1);\\n}\\n\\n
Notice that the spread value isn’t included — it’s optional and depends on the desired effect. Alternatively, setting the offset and blur radius to 0px
while adding a spread value creates a border-like shadow:
.bordered-stacked-shadows {\\n box-shadow: 0px 0px 0px 2px rgba(0,0,0,0.5), \\n 0px 2px 4px rgba(0,0,0,0.1),\\n 0px 4px 8px rgba(0,0,0,0.1),\\n 0px 8px 16px rgba(0,0,0,0.1);\\n}\\n\\n
Since this border effect uses box-shadow
, it doesn’t add extra space to the element’s parent container:
The left box features a smooth, layered shadow, while the right box has a defined shadow border.
\\nNow, let’s look at the box-shadow
in a practical scenario. This property can be used on almost any element on a webpage, but the more common ones include the navbar, text cards, and images. It can also be added to input fields and buttons:
Build a simple webpage like the one shown in the demo, and try styling the box-shadow
yourself!
In the real world, shadows are usually black or gray with varying opacity. But what if shadows had colors? Colored shadows occur when the light source itself is colored. Since there’s no real equivalent of a light source in CSS, you can achieve this neon effect by adjusting the color value in box-shadow
.
Let’s modify our first example:
\\n.box{\\n box-shadow: 0px 5px 10px 0px rgba(0, 0, 0, 0.7); \\n}\\n.box2{\\n box-shadow: inset 0px 5px 10px 0px rgba(0, 0, 0, 0.7);\\n}\\n\\n
This is the output:
\\n\\nTo create a more vibrant glow, you can layer multiple shadows:
\\nbox-shadow: 0px 1px 2px 0px rgba(0,255,255,0.7),\\n 1px 2px 4px 0px rgba(0,255,255,0.7),\\n 2px 4px 8px 0px rgba(0,255,255,0.7),\\n 2px 4px 16px 0px rgba(0,255,255,0.7);\\n\\n
Neon shadows are best showcased on dark-themed web pages. Dark themes are widely popular, and when combined with contrasting colors, neon shadows can enhance the aesthetic.
\\nTo see this effect in action, we’ll adjust the earlier demo by darkening the background and experimenting with different shadow colors:
\\n\\nUsing colors that contrast well—like the blue box-shadow
against a dark background in this demo—ensures the effect is visually striking. Increasing the opacity makes the glow even brighter.
Neumorphism is a modern design trend derived from skeuomorphism, which replicates real-world objects in digital interfaces. This effect makes UI components appear to extrude from the background, creating a soft, three-dimensional look.
\\nTo achieve this, you can apply two opposite box-shadow
values:
.neumorphic-shadow {\\n box-shadow: \\n -10px -10px 15px rgba(255,255,255,0.5),\\n 10px 10px 15px rgba(70,70,70,0.12);\\n}\\n\\n
To create an inset effect, place the shadows inside the element:
\\n.neumorphic-shadow {\\n box-shadow: \\n inset -10px -10px 15px rgba(255, 255, 255, 0.5), \\n inset 10px 10px 15px rgba(70, 70, 70, 0.12);\\n}\\n\\n
In the example above, two shadows work in opposite directions. The white box-shadow
simulates the light source, acting as a highlight—similar to how light interacts with objects in real life:
Neumorphic design mimics real-world objects in a way that makes them feel tangible. Let’s take this a step further and create an interactive push switch using a checkbox:
<input type="checkbox" class="neumorphic-switch" />
.neumorphic-switch {\\n display: flex;\\n align-items: center;\\n justify-content: center;\\n height: 200px;\\n width: 200px;\\n border-radius: 50%;\\n box-shadow: \\n -10px -10px 15px rgba(255, 255, 255, 0.5),\\n 10px 10px 15px rgba(70, 70, 70, 0.12);\\n border: 20px solid #ececec;\\n outline: none;\\n cursor: pointer;\\n -webkit-appearance: none;\\n}\\n\\n
We’ll use Font Awesome for the power button icon. Link the CDN and add the icon’s Unicode:
\\n.neumorphic-switch::after {\\n font-family: FontAwesome;\\n content: \\"\\\\f011\\"; /*ON/OFF icon Unicode*/\\n color: #7a7a7a;\\n font-size: 70px;\\n}\\n\\n
When clicked, the button will invert the shadow effect using two inset layers:
\\n.neumorphic-switch:checked{\\n box-shadow: \\n -10px -10px 15px rgba(255, 255, 255, 0.5),\\n 10px 10px 15px rgba(70, 70, 70, 0.12),\\n inset -10px -10px 15px rgba(255, 255, 255, 0.5),\\n inset 10px 10px 15px rgba(70, 70, 70, 0.12);\\n}\\n\\n
Finally, update the icon color when the switch is activated:
\\n.neumorphic-switch:checked::after{\\n color: #15e38a;\\n}\\n\\n
Here’s the final result:
box-shadow examples
There are many different ways to use box-shadow
, depending on your design needs. Below is an interactive gallery showcasing various shadow styles along with their corresponding code snippets:
See the Pen CSS Shadow Examples by abiolaesther_ (@coded_fae) on CodePen: https://codepen.io/coded_fae/pen/raNNKNZ
Using box-shadow with the View Transitions API
With the View Transitions API, you can dynamically apply box-shadow
styles for smooth element and page transitions. Let’s explore how this works with a simple example that focuses on same-document transitions.
We’ll start by defining styles for a card component that expands and collapses when clicked:
\\n.card {\\n ...\\n box-shadow: 0 0 0.25rem 0.5rem rgba(0, 0, 0, .15);\\n}\\n\\n.card--collapsible {\\n height: 120px;\\n overflow: hidden;\\n view-transition-name: card;\\n}\\n\\n.card--expanded {\\n height: auto;\\n box-shadow: 0 1rem 2rem rgba(0, 0, 0, .35);\\n}\\n\\n
Using JavaScript’s classList
API, we can toggle the CSS class that controls box-shadow
styles:
const card = document.querySelector(".card");

card?.addEventListener("click", () => {
  // Toggle the class defined in the CSS above
  card.classList.toggle("card--expanded");
});
This logic can be passed to the document.startViewTransition() method of the View Transitions API to enhance the effect:
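A minimal sketch of that wiring, with a fallback for browsers that don't support the API (the class name matches the CSS above):
const card = document.querySelector(".card");

card?.addEventListener("click", () => {
  const toggle = () => card.classList.toggle("card--expanded");

  // Fall back to an instant toggle where the API is unavailable
  if (!document.startViewTransition) {
    toggle();
    return;
  }

  // The browser snapshots the old state, runs the callback,
  // then animates between the two states
  document.startViewTransition(toggle);
});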
As demonstrated in the demo, the View Transitions API applies a default cross-fading animation, making it appear as though the shadow itself is smoothly transitioning.
\\nIf you prefer handling styles via JavaScript, you can use the boxShadow
property:
const targetElement = document.querySelector(".my-element");
// Optional chaining isn't valid on the left side of an assignment,
// so guard the element explicitly
if (targetElement) {
  targetElement.style.boxShadow = "0 0 3px 4px hsl(25deg 50% 50% / 20%)";
}
Before implementing this in production, make sure to check the browser support for View Transitions, which is currently just above 74 percent.
Using box-shadow with native CSS nesting
When managing complex box-shadow
utilities, native CSS nesting helps reduce redundancy and improves maintainability.
For example, say you have a card component with different shadow styles for hover, active, and focus states. Instead of writing separate rules, you can use CSS nesting to simplify the structure:
\\n/* Simple card */\\n.card {\\n /* Card styles */\\n box-shadow: ...;\\n &:hover {\\n box-shadow: ...;\\n }\\n &:active {\\n box-shadow: ...;\\n }\\n}\\n\\n/* Elevated card variation */\\n.card--elevated {\\n /* Elevated card styles */\\n box-shadow: ...;\\n\\n &:hover { ... }\\n &:active { ... }\\n}\\n\\n
This structure is easier to manage and maintain compared to separate rule sets. We can take this one step further with cascade layers, another important modern CSS feature covered in the next section.
Using box-shadow in @layer blocks
The @layer
rule in CSS helps control specificity issues and maintain cleaner styles. You can use it to structure box-shadow
utilities in a more organized way.
The first line below establishes the specificity order, ensuring that utility styles override component styles:
@layer base, components, utilities;
Next, define your shadow values as custom properties in the base layer:
@layer base {
  :root {
    --shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.1);
    --shadow-md: 0 1px 3px rgba(0, 0, 0, 0.15);
    --shadow-lg: 0 4px 6px rgba(0, 0, 0, 0.2);

    @media (prefers-color-scheme: dark) {
      --shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.2);
      --shadow-md: 0 1px 3px rgba(0, 0, 0, 0.3);
      --shadow-lg: 0 4px 6px rgba(0, 0, 0, 0.4);
    }
  }
}
Then apply the box-shadow styles within the component and utility layers:
@layer components {
  .card {
    box-shadow: var(--shadow-md);

    &:hover { box-shadow: var(--shadow-lg); }
    &:active { box-shadow: var(--shadow-sm); }
  }
}

@layer utilities {
  .shadow-sm { box-shadow: var(--shadow-sm); }
  .shadow-md { box-shadow: var(--shadow-md); }
  .shadow-lg { box-shadow: var(--shadow-lg); }
}
Because utility styles are declared later in the specificity order, they automatically override component styles when necessary.
\\n\\nThis layering strategy enhances maintainability, improves organization, and makes integrating styles with CSS frameworks much easier.
Interactive box-shadow generator
We have explored various use cases for the CSS box-shadow
property. If you want to go further and try out even more styles, you can experiment with this interactive box-shadow generator below:
See the Pen CSS Box Shadow Generator by abiolaesther_ (@coded_fae) on CodePen: https://codepen.io/coded_fae/pen/emOwyro
\\nAccording to Can I use, the box-shadow
CSS property is fully supported across all modern browsers, including their latest released versions.
box-shadow best practices
The box-shadow
property is a powerful way to enhance the visual appeal of your website, but improper use can negatively impact performance and design. Here are some best practices to keep in mind:
- Less is more: When layering multiple shadows, the browser has to perform more rendering work. This may not be an issue on high-end devices, but users with older hardware or slow internet connections might experience lag
- Be consistent: Avoid using inconsistent shadow styles. Shadows should follow a single light source to maintain a cohesive and realistic design
- Use animations sparingly: Animating box-shadow can significantly impact performance. Since box-shadow already enhances UI elements, keep animations minimal, such as a subtle transition effect on :hover (see the sketch after this list)
- Use a shadow layering tool: Instead of manually writing multiple shadow values, use a tool like shadows.brumm.af (https://shadows.brumm.af/). It lets you generate and adjust up to 10 box-shadow layers, making it easier to achieve complex and refined shadow effects
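On the animation point, one widely used workaround (an assumption here, not something the examples above rely on) is to pre-render the hover shadow on a pseudo-element and animate only its opacity, which is cheaper for the browser than animating box-shadow itself:
/* Pre-render the hover shadow, then animate only its opacity */
.card {
  position: relative;
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.12);
}
.card::after {
  content: "";
  position: absolute;
  inset: 0;
  border-radius: inherit;
  box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
  opacity: 0;
  transition: opacity 0.3s ease;
}
.card:hover::after {
  opacity: 1;
}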
In this article, we explored various techniques for using the box-shadow
CSS property, including:
- Using box-shadow with the View Transitions API, native CSS nesting, and cascade layers
- Combining box-shadow with text-shadow to create well-rounded visual effects
! The best way to improve is through hands-on experimentation. Try using an inline box-shadow generator to see how many shadow layers you can stack, experiment with different color combinations, and test your designs across multiple devices to ensure optimal performance.
When building React Native applications, one of the recurring challenges is managing constants. Whether it's navigation routes, theme colors, or application states, relying on hardcoded values scattered throughout your codebase can lead to errors, poor readability, and maintenance headaches.
\\nThis is where TypeScript enums come in. Enums allow you to define a set of named values that give your code structure, improve readability, and make debugging a breeze.
\\nIn this detailed guide, I’ll walk you through what enums are, why you should use them, and show you step-by-step examples in React Native. I’ll explain each example in simple, relatable terms, ensuring you understand not just how to use enums, but why they matter. I’ll also cover best practices, alternatives like union types, and when to avoid enums.
\\nAn enum (short for enumeration) is a TypeScript feature that lets you define a collection of related values under one name. Instead of repeating strings or numbers throughout your code, enums give those values meaningful names.
\\nThere are two main types of enums in TypeScript:
\\nHere’s how they look:
\\n// Numeric Enum\\nenum Status {\\n Active = 1, // Starts at 1\\n Inactive, // Automatically becomes 2\\n Archived, // Automatically becomes 3\\n}\\n\\n// String Enum\\nenum Theme {\\n Light = \'LIGHT\', // Explicitly assigns \\"LIGHT\\"\\n Dark = \'DARK\', // Explicitly assigns \\"DARK\\"\\n}\\n
Enums are especially useful when you have a small, fixed set of values that you know won’t change frequently. This makes them a perfect fit for many use cases in React Native.
\\nEnums offer several benefits that directly impact the quality of your code:
\\nInstead of using arbitrary strings or numbers, enums provide clear, descriptive names. This makes your code self-explanatory.
\\nWithout enums:
\\nif (route === \'HomeScreen\') { ... }\\n
With enums:
\\nif (route === Routes.Home) { ... }\\n
One of the biggest advantages of using TypeScript enums in React Native is the built-in type safety. With plain strings or numbers, it's easy to introduce typos, inconsistent naming, or invalid values: errors that can slip through unnoticed until they cause unexpected behavior at runtime.
\\nBut with enums, TypeScript acts like a strict gatekeeper, ensuring that only valid values are used. If you try to assign a value that’s not part of the enum, TypeScript will immediately throw an error during development, saving you from runtime crashes and hours of debugging.
\\nLet’s say you’re handling user authentication states in your app:
\\nenum AuthStatus {\\n LoggedIn = \\"LOGGED_IN\\",\\n LoggedOut = \\"LOGGED_OUT\\",\\n Pending = \\"PENDING\\",\\n}\\n\\n// Function expecting an AuthStatus enum\\nfunction handleAuth(status: AuthStatus) {\\n if (status === AuthStatus.LoggedIn) {\\n console.log(\\"User is logged in.\\");\\n }\\n}\\n\\n// ❌ Incorrect value (throws an error at compile-time)\\nhandleAuth(\\"LOGGED-IN\\"); // TypeScript Error: Argument of type \'\\"LOGGED-IN\\"\' is not assignable to parameter of type \'AuthStatus\'.\\n\\n// ✅ Correct usage\\nhandleAuth(AuthStatus.LoggedIn);\\n
In a JavaScript-only project, this typo (LOGGED-IN
instead of LOGGED_IN
) wouldn’t be caught until the app runs, potentially leading to broken logic. But with TypeScript enums, the error is flagged immediately, helping you catch issues early.
Why does this matter? TypeScript enums can help eliminate the risk of silent failures due to typos, provide clear auto-completion in IDEs (making coding faster), and ensure that only expected values are passed into functions, reducing runtime errors.
\\nAs your app grows, managing hardcoded values scattered across different files becomes a nightmare. Imagine manually updating screen names, theme colors, or API statuses across dozens of components. Not only is it tedious, but the chances of missing a reference are high, leading to inconsistencies and bugs.
\\nEnums solve this by offering a single source of truth. Instead of manually updating values in multiple places, you define them once in an enum and reference them everywhere. Change it in one place, and it updates across the entire app.
\\nFor instance, let’s think about the context of centralizing navigation routes:
\\n// Define all route names in one place\\nenum Routes {\\n Home = \\"HomeScreen\\",\\n Profile = \\"ProfileScreen\\",\\n Settings = \\"SettingsScreen\\",\\n}\\n\\n// Using enums in React Navigation\\n<Stack.Screen name={Routes.Home} component={HomeScreen} />\\n<Stack.Screen name={Routes.Profile} component={ProfileScreen} />\\n<Stack.Screen name={Routes.Settings} component={SettingsScreen} />\\n
Now, if you ever need to rename HomeScreen
to MainScreen
, you only update it in the Routes enum, and it applies everywhere automatically.
This will help prevent inconsistencies, reducing typos or mismatched route names. It also contributes to easier refactoring (changing a value is quicker and less risky), and better code organization, as constants are clearly grouped, making the codebase more readable.
\\nString enums, in particular, make debugging easier by providing meaningful values in logs. The logs always show standardized values, reducing confusion. The risk of logging incorrect or unexpected values is minimized, and if a mistake is made, TypeScript flags it during development, rather than letting it break the app in production.
\\nLet’s dive into some practical, real-world examples. I’ll explain each part of the code so you can see how enums make your life easier.
\\nWhen building an app, you’ll often define multiple screens. Hardcoding route names like HomeScreen
everywhere can lead to typos or inconsistent naming. By using enums, you can define all routes in one place and reference them across your app:
// Define navigation routes using an enum
enum Routes {
  Home = 'HomeScreen',
  Profile = 'ProfileScreen',
  Settings = 'SettingsScreen',
}

// React Navigation setup
import React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { createStackNavigator } from '@react-navigation/stack';
import { Text } from 'react-native';

const Stack = createStackNavigator();

// Screens
const HomeScreen = () => <Text>Welcome to the Home Screen!</Text>;
const ProfileScreen = () => <Text>This is your Profile</Text>;
const SettingsScreen = () => <Text>Here are your Settings</Text>;

const App = () => {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        {/* Use the Routes enum to define screen names */}
        <Stack.Screen name={Routes.Home} component={HomeScreen} />
        <Stack.Screen name={Routes.Profile} component={ProfileScreen} />
        <Stack.Screen name={Routes.Settings} component={SettingsScreen} />
      </Stack.Navigator>
    </NavigationContainer>
  );
};

export default App;
Here's what you might notice: this approach keeps your code organized, reduces errors, and improves readability. Below are a few key benefits:
- Easier refactoring: If you ever decide to rename HomeScreen to something else, you only update it in the enum
- Consistency: By referencing Routes.Home, you ensure the same name is used everywhere, reducing bugs
- Readability: It's immediately clear what Routes.Home represents compared to a raw string
If you're implementing light and dark themes in your app, you can use an enum to define your color palette instead of hardcoding color values directly in components. This makes it easy to manage and switch themes:
// Define theme colors using an enum
// Note: the Primary value was missing in the original; the hex code below is a placeholder
enum Colors {
  Primary = '#6200EE',
  Secondary = '#FFC107',
  BackgroundLight = '#FFFFFF',
  BackgroundDark = '#121212',
  TextLight = '#000000',
  TextDark = '#FFFFFF',
}

// Apply colors in a React Native component
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

const App = () => {
  const isDarkMode = true; // Simulate a dark mode toggle

  return (
    <View
      style={[
        styles.container,
        { backgroundColor: isDarkMode ? Colors.BackgroundDark : Colors.BackgroundLight },
      ]}
    >
      <Text
        style={{
          color: isDarkMode ? Colors.TextDark : Colors.TextLight,
        }}
      >
        Enums make theme management easy!
      </Text>
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
});

export default App;
The benefits of this include:
- The Colors enum contains all the color values used in your app. If you want to update the primary color, you do it once in the enum
- The isDarkMode variable dynamically switches between light and dark themes, with the enum handling the logic cleanly
- A theme redesign only requires changes to the Colors enum without touching multiple components
Enums are also helpful for managing application states, like a form submission process that includes multiple stages (idle, submitting, success, or error):
// Define form states as an enum
enum FormState {
  Idle = 'IDLE',
  Submitting = 'SUBMITTING',
  Success = 'SUCCESS',
  Error = 'ERROR',
}

// Use enums to manage form states
import React, { useState } from 'react';
import { View, Text, Button } from 'react-native';

const App = () => {
  const [formState, setFormState] = useState<FormState>(FormState.Idle);

  const handleSubmit = () => {
    setFormState(FormState.Submitting);
    // Simulate an API call
    setTimeout(() => {
      setFormState(FormState.Success); // Update state to success
    }, 2000);
  };

  return (
    <View>
      {formState === FormState.Idle && <Button title="Submit" onPress={handleSubmit} />}
      {formState === FormState.Submitting && <Text>Submitting...</Text>}
      {formState === FormState.Success && <Text>Form Submitted Successfully!</Text>}
      {formState === FormState.Error && <Text>Error Submitting Form</Text>}
    </View>
  );
};

export default App;
This makes form states clearer, transitions more predictable, and conditions easier to read. Here’s how:
- Instead of checking formState === 'SUBMITTING', you check formState === FormState.Submitting, which is easier to understand
The following tactics will help you make the best use of enums in React Native:
\\nRather than throwing all enums into a single file, organize them based on their purpose. This makes your code easier to navigate and maintain.
\\nA few examples:
\\nRoutes.ts
– For screen names in navigationTheme.ts
– For managing theme colorsFormStates.ts
– For tracking form submission statusKeeping enums separate prevents clutter and helps avoid unintended dependencies.
\\nEnums should be stored in well-named files and include comments explaining their purpose. This helps teammates (or your future self) understand them at a glance.
\\nExample:
\\n// enums/UserRoles.ts\\n\\n/**\\n * Defines different user roles within the app.\\n */\\nenum UserRole {\\n Admin = \\"ADMIN\\",\\n Editor = \\"EDITOR\\",\\n Viewer = \\"VIEWER\\",\\n}\\n
This structure makes it clear what the enum is for, without having to dig through unrelated code.
\\n\\nNaming matters. Stick to PascalCase for enum names and UPPER_CASE for values to keep things readable.
\\nGood practice:
\\nenum PaymentStatus {\\n Pending = \\"PENDING\\",\\n Completed = \\"COMPLETED\\",\\n Failed = \\"FAILED\\",\\n}\\n
Bad practice:
\\nenum paymentstatus {\\n pending = \\"pending\\",\\n completed = \\"completed\\",\\n failed = \\"failed\\",\\n}\\n
Following a naming convention keeps your enums easy to read and reduces confusion.
\\nEnums are best for values that won’t change often. If you expect frequent updates (like a list of product categories from an API), consider using objects or union types instead.
\\nGood for enums:
\\nenum NotificationType {\\n Success = \\"SUCCESS\\",\\n Error = \\"ERROR\\",\\n Warning = \\"WARNING\\",\\n}\\n
Bad for enums (better as a dynamic list):
\\nenum ProductCategory {\\n Electronics = \\"ELECTRONICS\\",\\n Clothing = \\"CLOTHING\\",\\n Home = \\"HOME\\",\\n}\\n
If new product categories can be added over time, using an enum makes updates harder.
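As a sketch of the more flexible alternative for a list like this (union types are covered in more detail in the next section), you can derive the type from the data itself, so adding a category is a one-line change:
// Derive the type from the data instead of locking it into an enum
const productCategories = ["ELECTRONICS", "CLOTHING", "HOME"] as const;

type ProductCategory = (typeof productCategories)[number];
// => "ELECTRONICS" | "CLOTHING" | "HOME"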
\\nBy applying these best practices, enums will stay organized, readable, and easy to manage, without adding unnecessary complexity to your code.
\\nWhile enums are powerful, they aren’t always the best choice. For example, if you have a dynamic set of values or prefer a simpler approach, union types might be a better fit. Union types in TypeScript allow a variable to accept only a predefined set of values, ensuring strict type safety while avoiding runtime overhead:
\\ntype ScreenRoutes = \\"HomeScreen\\" | \\"ProfileScreen\\" | \\"SettingsScreen\\";\\n
There are a few key advantages to using union types:
\\nEnums require explicit declarations, additional syntax, and often a separate file for organization. Union types, on the other hand, let you define all valid values directly without extra setup. This makes your code more concise and self-explanatory. Instead of navigating to an enum file to check what values are allowed, union types keep everything in plain sight, making it easier to read and maintain. That makes union types ideal for small sets of values that don’t need complex mappings.
\\nEnums compile into JavaScript objects, meaning they add extra code that exists at runtime. In most cases, this is negligible. But in performance-sensitive applications, every extra bit of JavaScript matters.
\\nUnion types, on the other hand, only exist in TypeScript. They disappear at runtime, leaving behind just raw string values in the compiled JavaScript. This keeps your app’s bundle size smaller and removes unnecessary processing. Union types, therefore, are ideal for large-scale applications where performance and minimal runtime code are paramount.
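To make the difference concrete, this is roughly what the TypeScript compiler emits for a string enum, while a union type leaves nothing behind (exact output varies with compiler settings):
// Compiled JavaScript for: enum Theme { Light = "LIGHT", Dark = "DARK" }
var Theme;
(function (Theme) {
    Theme["Light"] = "LIGHT";
    Theme["Dark"] = "DARK";
})(Theme || (Theme = {}));

// A union type such as: type ThemeName = "LIGHT" | "DARK";
// compiles to nothing -- only the raw strings remain at runtime.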
\\nOne of the biggest limitations of enums is that they are static; you define them once, and they cannot change dynamically. If your app pulls configurations, categories, or feature flags from an API, enums won’t be flexible enough. Union types, however, can easily integrate with dynamically generated values. This is especially useful when dealing with external data sources that might introduce new options over time.
Union types are typically best used in two common scenarios:
- Small, fixed sets of string values that don't need reverse lookups or extra runtime code
- Values that must stay in sync with dynamic sources, such as API responses or feature flags
\\nTypeScript enums are an essential tool for creating robust, readable, and maintainable React Native applications. By using enums for navigation routes, color schemes, and application states, you can reduce bugs, make your code easier to understand, and simplify updates.
\\nWith these examples, best practices, and alternatives, you’re ready to start using enums effectively in your React Native projects. Embrace enums, and watch your code become cleaner, safer, and more organized. Happy coding!
Editor's note: This article was last updated by Elijah Agbonze on 18 February 2025.
\\nInfinite scrolling is a powerful technique that improves user experience by loading content dynamically. In this guide, we’ll walk through three different ways to implement infinite scrolling in React, including both custom-built and library-based solutions.
\\nThe most common method to implement infinite scroll is to use prebuilt React libraries such as react-infinite-scroll-component
and react-window-infinite-loader
:
import InfiniteScroll from "react-infinite-scroll-component";
import { useState, useEffect } from "react";

function App() {
  const [page, setPage] = useState(1);
  const [products, setProducts] = useState<any[]>([]);
  const [totalProducts, setTotalProducts] = useState(0);

  const fetchData = async (page: number) => {
    try {
      const res = await fetch(
        `https://dummyjson.com/products/?limit=10&skip=${(page - 1) * 10}`
      );
      const data = await res.json();
      if (res.ok) {
        setProducts((prevItems) => [...prevItems, ...data.products]);
        page === 1 && setTotalProducts(() => data.total);
      }
    } catch (error) {
      console.log(error);
    }
  };

  useEffect(() => {
    let subscribed = true;
    (async () => {
      if (subscribed) {
        await fetchData(1);
      }
    })();
    return () => {
      subscribed = false;
    };
  }, []);

  const handleLoadMoreData = () => {
    setPage((prevPage) => {
      const nextPage = prevPage + 1;
      fetchData(nextPage);
      return nextPage;
    });
  };

  return (
    <InfiniteScroll
      dataLength={products.length}
      next={handleLoadMoreData}
      hasMore={totalProducts > products.length}
      loader={<p>Loading...</p>}
      endMessage={<p>No more data to load.</p>}
    >
      <div>
        {products.map((product) => (
          <div key={product.id}>
            <h2>
              {product.title} - {product.id}
            </h2>
          </div>
        ))}
      </div>
    </InfiniteScroll>
  );
}
However, there are multiple ways to achieve the infinite scroll effect. In this guide, we’ll explore three unique approaches to implementing infinite scrolling in React applications:
- Building the scroll detection yourself with scroll event listeners
- Using the ref and scrollTop API
- Using existing libraries such as react-infinite-scroll-component and react-window-infinite-loader, which helps save time and effort while still offering customization options
Infinite scroll eliminates the need for traditional pagination. Instead of navigating through many pages, users can scroll nonstop to view more content, making the experience more engaging and intuitive.
\\nInfinite scroll is widely used in social media platforms like Instagram, X, and TikTok, enabling users to endlessly browse through feeds of images and videos without interruption.
\\nNow that we’ve established what infinite scroll is, let’s proceed with the tutorial.
First, we'll set up a foundation that will be shared across all the infinite scrolling techniques.
\\nTo get started, let’s first set up a React application using Vite. In your terminal, run the following commands:
\\nnpm create vite@latest ecommerce-app -- --template react-ts\\ncd ecommerce-app\\nnpm i\\n\\n
The commands above will generate a React application in TypeScript and install all dependencies. Next, we’ll set up the initial state for our component in App.tsx
. This includes the list of items to display, the necessary loading and error indicators, and a variable to track the total products available:
// App.tsx
import React, { useState, useEffect } from 'react';

type ProductItem = {
  id: number;
  title: string;
  description: string;
  category: string;
  price: number;
  rating: number;
  thumbnail: string;
  brand: string;
  discountPercentage: number;
};

function App() {
  const [loading, setLoading] = useState(true);
  const [products, setProducts] = useState<ProductItem[]>([]);
  const [totalProducts, setTotalProducts] = useState(0);
  const [error, setError] = useState<null | Error>(null);
  // rest of component
}
Next, we’ll create a function to fetch data from an API, increment the page number, and update the state with the fetched items. Additionally, we’ll handle any errors during the data fetching process:
const fetchData = async (page: number) => {
  try {
    setLoading(true);
    const res = await fetch(
      `https://dummyjson.com/products/?limit=10&skip=${(page - 1) * 10}`
    );
    const data = await res.json();
    if (res.ok) {
      setProducts((prevItems) => [...prevItems, ...data.products]);
      page === 1 && setTotalProducts(() => data.total); // only set this once
    }
    setLoading(false);
  } catch (error) {
    setLoading(false);
    if (error instanceof Error) {
      setError(error);
    }
  }
};
For this tutorial, we will be using the DummyJSON products API. DummyJSON doesn’t offer an explicit page
param. Instead, it uses limit
and skip
for rendering pagination. limit
is the maximum number of products we want per API call, and skip
is the number of items we intend to skip for each page, which in our case would be the previous page multiplied by 10.
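For instance, here is what the computed request looks like for page 3 (any page number works the same way):
const page = 3;
const url = `https://dummyjson.com/products/?limit=10&skip=${(page - 1) * 10}`;
// -> https://dummyjson.com/products/?limit=10&skip=20 (returns products 21-30)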
Calling fetchData on component mount
Next, we'll use the useEffect
Hook to call the fetchData
function when the component mounts initially:
useEffect(() => {
  let subscribed = true;
  (async () => {
    if (subscribed) {
      await fetchData(1);
    }
  })();

  return () => {
    subscribed = false;
  };
}, []);
In a useEffect Hook, we want to make sure we clean up the asynchronous function call to avoid state updates after the component has unmounted. AbortController is another way to cancel a fetch call during cleanup. You can learn more about how to clean up React's useEffect Hook.
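A minimal sketch of that alternative, assuming fetchData is adapted to accept a signal and forward it to fetch as fetch(url, { signal }):
useEffect(() => {
  const controller = new AbortController();

  // Assumes fetchData forwards the signal to fetch(url, { signal })
  fetchData(1, controller.signal).catch((error) => {
    if (error.name !== "AbortError") console.error(error);
  });

  // Aborting on unmount cancels the in-flight request
  return () => controller.abort();
}, []);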
Creating the ProductCard component
Finally, we will create the ProductCard component that will be used for each product, along with the CSS styles it needs. Go ahead and create a components directory inside src. In it, create a ProductCard.tsx file and paste the code below:
// Assumes the ProductItem type is exported from a shared types file
import { ProductItem } from "../types";

export const ProductCard = ({ product }: { product: ProductItem }) => {
  const discountedPrice = (
    product.price -
    (product.price * product.discountPercentage) / 100
  ).toFixed(2);

  return (
    <div className="product-card">
      <img
        src={product.thumbnail}
        alt={product.title}
        className="product-image"
      />
      <div className="product-info">
        <h2 className="product-title">
          {product.title} - {product.id}
        </h2>
        <span className="product-category">{product.category}</span>
        {product.brand && (
          <span className="product-brand">{product.brand}</span>
        )}
        <p className="product-description">{product.description}</p>
        <div className="product-props">
          <div className="product-price">
            ${discountedPrice}
            <span className="product-original-price">
              ${product.price.toFixed(2)}
            </span>
          </div>
          <div className="product-rating">
            <span className="star-rating">{"★"}</span>
            <span>{Math.floor(product.rating)}</span>
          </div>
        </div>
        <button className="add-to-cart">Add to Cart</button>
      </div>
    </div>
  );
};
The Vite template we used in creating the React app comes with two CSS files by default, but we only need one. Delete the App.css
and its import in App.tsx
. In index.css
, replace the styles there with the ones below:
@import url(\\"https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;500;600;700&display=swap\\");\\n\\n.App {\\n font-family: \\"Poppins\\", sans-serif;\\n}\\n.products-list {\\n display: grid;\\n grid-template-columns: 1fr;\\n grid-gap: 10px;\\n max-width: 768px;\\n margin: 0 auto;\\n}\\n@media screen and (min-width: 768px) {\\n .products-list {\\n grid-template-columns: 1fr 1fr;\\n }\\n}\\n.product-card {\\n background-color: white;\\n border-radius: 8px;\\n box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);\\n max-width: 400px;\\n width: 100%;\\n overflow: hidden;\\n font-family: \\"Poppins\\", sans-serif;\\n}\\n.product-image {\\n width: 100%;\\n height: 250px;\\n object-fit: cover;\\n}\\n.product-info {\\n padding: 20px;\\n}\\n.product-title {\\n font-size: 24px;\\n margin: 0 0 10px;\\n}\\n.product-category,\\n.product-brand {\\n display: inline-block;\\n background-color: #e0e0e0;\\n padding: 5px 10px;\\n border-radius: 15px;\\n font-size: 12px;\\n margin-right: 5px;\\n}\\n.product-description {\\n font-size: 14px;\\n color: #666;\\n margin: 10px 0;\\n}\\n.product-props {\\n display: flex;\\n justify-content: space-between;\\n align-items: center;\\n}\\n.product-price {\\n font-size: 24px;\\n font-weight: bold;\\n margin: 10px 0;\\n}\\n.product-original-price {\\n text-decoration: line-through;\\n color: #999;\\n font-size: 16px;\\n margin-left: 10px;\\n}\\n.product-rating {\\n display: flex;\\n align-items: center;\\n margin: 10px 0;\\n}\\n.star-rating {\\n color: #ffd700;\\n font-size: 18px;\\n margin-right: 5px;\\n}\\n.add-to-cart {\\n display: block;\\n width: 100%;\\n padding: 10px;\\n background-color: #4caf50;\\n color: white;\\n border: none;\\n border-radius: 4px;\\n font-size: 16px;\\n cursor: pointer;\\n margin-top: 20px;\\n}\\n.add-to-cart:hover {\\n background-color: #45a049;\\n}\\n\\n
These foundational steps will be present in all the techniques we discuss in this article. We’ll modify and expand upon them as a base.
\\nBuilding the entire infinite scroll implementation from scratch involves handling the scroll event, loading more data, and updating the state in your React application. This approach provides you with full control over customization and functionality.
\\nTo get started, create a component FromScratch.tsx
in the components directory, and initialize it with the following:
import { useEffect, useState } from \"react\";\nimport { ProductCard } from \"./ProductCard\";\nimport { ProductItem } from \"../types\";\n\nexport const FromScratch = ({\n products,\n fetchData,\n loading,\n error\n}: {\n products: ProductItem[];\n fetchData: (page: number) => Promise<void>;\n loading: boolean;\n error: null | Error\n}) => {\n const [page, setPage] = useState(1);\n\n // scroll logic\n\n return (\n <div>\n <div className=\"products-list\">\n {products.map((product, index) => (\n <ProductCard product={product} key={index} />\n ))}\n </div>\n {loading && <p>Loading...</p>}\n {error && <p>Error: {error.message}</p>}\n </div>\n );\n};\n\n
Next, we’ll create a function to handle the scroll
event. This function will check if the user has reached the bottom of the page and call fetchData
if necessary. We’ll add a scroll
event listener to the window
object and remove it when the component is unmounted. In place of the // scroll logic
comment, add this:
const handleScroll = () => {\\n const bottom =\\n Math.ceil(window.innerHeight + window.scrollY) >=\\n document.documentElement.scrollHeight - 200;\\n if (bottom) {\\n setPage((prevPage) => {\\n const nextPage = prevPage + 1;\\n fetchData(nextPage);\\n return nextPage;\\n });\\n }\\n};\\n\\nuseEffect(() => {\\n window.addEventListener(\\"scroll\\", handleScroll);\\n return () => {\\n window.removeEventListener(\\"scroll\\", handleScroll);\\n };\\n}, []);\\n\\n
If you’re wondering why the fetchData
is called inside the setPage
callback, it is simply because the fetchData
function is a side effect of the page
state. With this we’re making sure the fetchData
has access to the most recent page
state.
React’s state updates are asynchronous, so relying on the updated state immediately after calling the setState function can lead to race conditions. Another alternative is to use the useEffect
hook to track when the page
has been updated, but in this case, we don’t have that luxury.
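To see why, here's a minimal sketch of the pitfall; handleScrollNaive is a hypothetical name, not part of the tutorial:
const handleScrollNaive = () => {\n // `page` is captured from the current render, so this reads a stale value\n setPage(page + 1);\n fetchData(page + 1); // may request the same page twice on rapid scroll events\n};\n\n
The functional setPage callback avoids this, because React always hands it the latest state value.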
Next, let’s import our newly created component into App.tsx
and run the app:
// App.tsx\n<div>\n <FromScratch\n products={products}\n fetchData={fetchData}\n loading={loading}\n error={error}\n />\n</div>\n\n
You can run the app with:
\\nnpm run dev\\n\\n
And in your browser you should have something like this:
\\nIf you’re not creating along with this tutorial, then you can follow up with the Codesandbox app.
\\nOur handleScroll
function performs a check for the bottom of the page every time the user scrolls, which we don't want. Instead, we only want to check for the bottom when the user stops scrolling. Debouncing, in our case, creates a cooldown period: the bottom check runs only after the user has stopped scrolling:
// FromScratch.tsx\nconst debounce = (func: (...args: any[]) => void, delay: number) => {\n let timeoutId: ReturnType<typeof setTimeout>;\n\n return function (...args: any[]) {\n if (timeoutId) {\n clearTimeout(timeoutId);\n }\n\n timeoutId = setTimeout(() => {\n // Spread the captured arguments so the wrapped function receives them as passed\n func(...args);\n }, delay);\n };\n};\n\n
Next we’ll update the handleScroll
to make use of the debounce
function:
const handleScroll = debounce(() => {\\n const bottom =\\n Math.ceil(window.innerHeight + window.scrollY) >=\\n document.documentElement.scrollHeight - 200;\\n\\n if (bottom) {\\n setPage((prevPage) => {\\n const nextPage = prevPage + 1;\\n fetchData(nextPage);\\n return nextPage;\\n });\\n }\\n}, 300);\\n\\n
The purpose of clearing the timeout in the debounce function is to cancel the previously scheduled check whenever scrolling resumes, so the check only fires once the delay has passed without further scroll events.
\nIn a full-scale application, you'd most likely need infinite scroll on multiple pages; in this section, we'll turn the scroll logic into a reusable custom hook. So create a hooks
directory in src
and create a useInfiniteScroll.ts
file:
import { useEffect, useState } from \"react\";\nconst debounce = (func: (...args: any[]) => void, delay: number) => {\n let timeoutId: ReturnType<typeof setTimeout>;\n return function (...args: any[]) {\n if (timeoutId) {\n clearTimeout(timeoutId);\n }\n timeoutId = setTimeout(() => {\n func(...args);\n }, delay);\n };\n};\n\nexport const useInfiniteScroll = (fetchData: (page: number) => Promise<void>) => {\n const [page, setPage] = useState(1);\n\n const handleScroll = debounce(() => {\n const bottom =\n Math.ceil(window.innerHeight + window.scrollY) >=\n document.documentElement.scrollHeight - 200;\n if (bottom) {\n setPage((prevPage) => {\n const nextPage = prevPage + 1;\n fetchData(nextPage);\n return nextPage;\n });\n }\n }, 300);\n\n useEffect(() => {\n window.addEventListener(\"scroll\", handleScroll);\n return () => {\n window.removeEventListener(\"scroll\", handleScroll);\n };\n }, []);\n};\n\n
Now in FromScratch.tsx
we can replace the scroll logic with:
import { useInfiniteScroll } from \"../hooks/useInfiniteScroll\";\n\nexport const FromScratch = ({\n products,\n fetchData,\n loading,\n error,\n}: {\n products: ProductItem[];\n fetchData: (page: number) => Promise<void>;\n loading: boolean;\n error: null | Error;\n}) => {\n useInfiniteScroll(fetchData);\n\n // rest of component\n};\n\n
With that, we have a fully functional infinite scroll implementation built from scratch. This approach allows for extensive customization and more control over functionality. However, it may be more time-consuming and, as we’ve seen, requires more maintenance than using an existing library or component.
\\n\\nUsing an existing infinite scroll library or component can save time and effort as you leverage pre-built and pre-tested solutions while retaining customization options. We will cover two of these libraries in this section.
\\nreact-infinite-scroll-component is a popular library for implementing infinite scrolling in React. Let’s learn how to use this library to create infinite scrolling in our e-commerce application. First, install react-infinite-scroll-component:
\\nnpm install react-infinite-scroll-component\\n\\n
Now we can create a new component within the components
directory, we’ll call it WithReactScroll.tsx
. We’ll import the InfiniteScroll
component from the library, and wrap the list of products in it. Configure the component by passing the necessary props like dataLength
, next
, hasMore
, and loader
:
import InfiniteScroll from \"react-infinite-scroll-component\";\nimport { ProductCard } from \"./ProductCard\";\nimport { useState } from \"react\";\nimport { ProductItem } from \"../types\";\n\nexport const WithReactScroll = ({\n products,\n fetchData,\n totalProducts,\n}: {\n products: ProductItem[];\n fetchData: (page: number) => Promise<void>;\n totalProducts: number;\n}) => {\n const [page, setPage] = useState(1);\n\n const handleLoadMoreData = () => {\n setPage((prevPage) => {\n const nextPage = prevPage + 1;\n fetchData(nextPage);\n return nextPage;\n });\n };\n\n return (\n <InfiniteScroll\n dataLength={products.length}\n next={handleLoadMoreData}\n hasMore={totalProducts > products.length}\n loader={<p>Loading...</p>}\n endMessage={<p>No more data to load.</p>}\n >\n <div className=\"products-list\">\n {products.map((item) => (\n <ProductCard product={item} key={item.id} />\n ))}\n </div>\n </InfiniteScroll>\n );\n};\n\n
Now we can import the WithReactScroll
component into our App
component to see the result:
<div>\\n <WithReactScroll\\n products={products}\\n fetchData={fetchData}\\n totalProducts={totalProducts}\\n />\\n</div>\\n\\n
You can view the result on Codesandbox.
\\nWith that, we’ve implemented infinite scrolling in our React application. We didn’t use the window’s scroll
event because react-infinite-scroll-component
handles that for us. The react-infinite-scroll-component
library offers a faster and more streamlined implementation process but still provides customization options, like scroll height and scroll overflow. However, you should keep in mind the trade-off of introducing additional dependencies to your project.
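For instance, if you want the list to scroll inside a fixed-height container rather than the window, the library exposes a height prop. A minimal sketch:
<InfiniteScroll\n dataLength={products.length}\n next={handleLoadMoreData}\n hasMore={totalProducts > products.length}\n loader={<p>Loading...</p>}\n height={600} // scrolls inside a 600px-tall container instead of the window\n>\n <div className=\"products-list\">{/* ... */}</div>\n</InfiniteScroll>\n\n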
Second on our list is the react-window library, which was designed for rendering large lists efficiently, and the react-window-infinite-loader library, which is used to handle infinite scrolling and load more data as the user scrolls. First, we'll install the react-window-infinite-loader and react-window libraries:
\\nnpm install react-window-infinite-loader react-window\\n\\n
Next, we’ll create a WithReactWindow
component in the components
directory:
import { useState } from \"react\";\nimport { FixedSizeList as List } from \"react-window\";\nimport InfiniteLoader from \"react-window-infinite-loader\";\nimport { ProductCard } from \"./ProductCard\";\nimport { ProductItem } from \"../types\";\n\nexport const WithReactWindow = ({\n fetchData,\n products,\n totalProducts,\n loading,\n}: {\n products: ProductItem[];\n fetchData: (page: number) => Promise<void>;\n totalProducts: number;\n loading: boolean;\n}) => {\n const [page, setPage] = useState(1);\n const hasNextPage = totalProducts > products.length;\n // Include one extra row for the loading placeholder when more data exists\n const itemCount = hasNextPage ? products.length + 1 : products.length;\n\n const handleLoadMoreData = () => {\n if (loading) return;\n setPage((prevPage) => {\n const nextPage = prevPage + 1;\n fetchData(nextPage);\n return nextPage;\n });\n };\n\n const isItemLoaded = (index: number) => !hasNextPage || index < products.length;\n\n const Row = ({ index, style }: { index: number, style: { [key: string]: any } }) => {\n return (\n <div style={style}>\n {isItemLoaded(index) ? (\n <ProductCard product={products[index]} />\n ) : (\n \"Loading...\"\n )}\n </div>\n );\n };\n\n return (\n <InfiniteLoader\n isItemLoaded={isItemLoaded}\n itemCount={itemCount}\n loadMoreItems={handleLoadMoreData}\n >\n {({ onItemsRendered, ref }) => (\n <List\n height={window.innerHeight}\n itemCount={itemCount}\n itemSize={600}\n onItemsRendered={onItemsRendered}\n ref={ref}\n width={450}\n >\n {Row}\n </List>\n )}\n </InfiniteLoader>\n );\n};\n\n
In the code above, when we combine InfiniteLoader and FixedSizeList, the component ensures that only the visible items are rendered and new items are loaded as the user scrolls down, creating the infinite scroll effect.
Now we can import the component into App
to see the result:
<div>\\n <WithReactWindow\\n products={products}\\n fetchData={fetchData}\\n totalProducts={totalProducts}\\n loading={loading}\\n />\\n</div>\\n\\n
You can also view the result on Codesandbox.
\nThe Intersection Observer API is a modern browser API that can detect when elements come into view, making it well-suited to triggering content loading for infinite scrolling. It observes changes in the intersection of target elements with an ancestor element or the viewport.
\\nWe’ll create a new component inside the components
directory; we'll call it WithIntersectionObserver
. Next, we’ll create a ref
for the observer
target element and set up the Intersection Observer in a useEffect
Hook. When the target element comes into view, call the fetchData
function as follows:
import { useEffect, useRef, useState } from \"react\";\nimport { ProductCard } from \"./ProductCard\";\nimport { ProductItem } from \"../types\";\n\nexport const WithIntersectionObserver = ({\n products,\n fetchData,\n error,\n loading\n}: {\n products: ProductItem[];\n fetchData: (page: number) => Promise<void>;\n error: null | Error,\n loading: boolean\n}) => {\n const [page, setPage] = useState(1);\n const observerTarget = useRef(null);\n\n useEffect(() => {\n const observer = new IntersectionObserver(\n (entries) => {\n if (entries[0].isIntersecting) {\n setPage((prevPage) => {\n const nextPage = prevPage + 1;\n fetchData(nextPage);\n return nextPage;\n });\n }\n },\n { threshold: 1 }\n );\n if (observerTarget.current) {\n observer.observe(observerTarget.current);\n }\n return () => {\n if (observerTarget.current) {\n observer.unobserve(observerTarget.current);\n }\n };\n }, [observerTarget]);\n\n // rest of component\n};\n\n
Then, render the items, loading indicator, error messages, and the observer
target element within the component:
return (\\n <>\\n <div className=\\"products-list\\">\\n {products.map((product) => (\\n <ProductCard product={product} key={product.id} />\\n ))}\\n </div>\\n <div ref={observerTarget}></div>\\n {loading && <p>Loading...</p>}\\n {error && <p>Error: {error.message}</p>}\\n </>\\n);\\n\\n
By leveraging the Intersection Observer API, we have created an efficient and performant infinite scrolling solution in our React application. This approach offers a modern, browser-native method for detecting when elements come into view, but it may not be supported in older browsers and environments without a polyfill.
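If you need to support such environments, one option is a simple feature check before choosing an implementation. A sketch:
if (\"IntersectionObserver\" in window) {\n // safe to use the observer-based implementation above\n} else {\n // fall back to a (debounced) scroll listener, or load a polyfill first\n}\n\n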
\\nNext, we’ll import the new component into App
to see the results:
// App.tsx\n\n<div>\n <WithIntersectionObserver\n products={products}\n fetchData={fetchData}\n loading={loading}\n error={error}\n />\n</div>\n\n
You can also view the results on Codesandbox.
\\nWe’ve explored three different ways to implement infinite scroll in React. Since you’ll likely only need one for your application, choosing the right method depends on your app’s specific requirements. Here are some key considerations to help you decide:
\\nLet’s break down the pros and cons of each method:
\\nThis approach gives you full control over the implementation and allows for unlimited customization. However, it requires more effort to optimize performance (e.g., debouncing scroll events, efficiently updating the DOM) and can be prone to bugs if not implemented carefully.
\\n\\nUse this method if:
\\nLibraries provide a quick and reliable way to implement infinite scroll, often with built-in optimizations and community support. However, they can add bloat to your app, and you’ll rely on the library’s maintainers for updates and bug fixes.
\\nUse this method if:
\\nThis modern approach eliminates the need for scroll event listeners and debouncing, making it more performant and easier to maintain. However, it’s not fully supported in older browsers like Internet Explorer.
\\n\\nUse this method if:
\\nref
and scrollTop
API

Scroll to top is an additional functionality often implemented alongside infinite scrolling that provides a better user experience.
\nOn X, for example, when you scroll through your For You page (FYP), it never really ends; this is infinite scrolling in action. Then, when you click the home icon in the X navigation menu, it takes you right back to the top. The home icon on X serves two purposes: to refresh and fetch more data for your FYP, and to provide modern scroll to top functionality.
\nFor a good user experience, I think all implementations of infinite scroll should have the option to scroll back up to the top of the feed. To implement this in React, we will need the scrollTop
property and the useRef()
Hook to have good control of the scroll position.
Together, they come in handy when implementing features like scroll to top buttons or dynamically loading content as the user scrolls, as seen in the example where we built the entire implementation from scratch. We will implement it in our App.tsx
file:
import { useRef } from 'react';\n\nfunction App() {\n const scrollableDiv = useRef<HTMLDivElement | null>(null);\n\n // scroll logic\n const scrollToTop = () => {\n if (scrollableDiv.current) {\n scrollableDiv.current.scrollTop = 0;\n }\n };\n\n return (\n <div ref={scrollableDiv}>\n {/* Infinite scroll content */}\n <button onClick={scrollToTop}>Scroll to Top</button>\n </div>\n );\n}\n\nexport default App;\n\n
In the code above, the scroll to the top is achieved using a ref
and scrollTop
, which directly accesses and manipulates the scroll position of the div
through the scrollToTop()
function. With this, we can easily improve our user’s infinite scrolling experience.
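If you prefer an animated scroll instead of an instant jump, the element's scrollTo method accepts a behavior option. A small variation on the same idea:
const scrollToTopSmooth = () => {\n scrollableDiv.current?.scrollTo({ top: 0, behavior: \"smooth\" });\n};\n\n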
Infinite scrolling is a powerful web design technique. It enhances the user experience by progressively loading content as users scroll down a page, thereby eliminating the need for pagination. In this article, we explored four different approaches for implementing infinite scrolling in React applications by building an e-commerce products page: building from scratch, two existing libraries, and the Intersection Observer API.
\nEach technique has its advantages and potential drawbacks, so it's essential to choose the method that best suits your specific requirements and your users' needs. By implementing infinite scrolling in your React applications, you can provide an intuitive user experience that keeps visitors engaged with your content. I hope you enjoyed this article! Be sure to leave a comment if you have any questions.
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nJavaScript’s reputation as a single-threaded language often raises eyebrows. How can it handle asynchronous operations like network requests or timers without freezing the application?
\\nThe answer lies in its runtime architecture, which includes the call stack, Web APIs, task queues (including the microtask queue), and the event loop.
\nThis article will discuss how JavaScript achieves this seemingly paradoxical feat. We'll explore the interplay between the call stack, event loop, and various queues that make it all possible while maintaining its single-threaded nature.
\\nJavaScript’s single-threaded nature means it can only execute one task at a time. But how does it keep track of what’s running, what’s next, and where to resume after an interruption? This is where the call stack comes into play.
\\nThe call stack is a data structure that records the execution context of your program. Think of it as a to-do list for JavaScript’s engine.
\\nHere’s how it operates:
\\nLet’s see how the call stack works by dissecting and following the execution path of the simple script below:
\nfunction logThree() {\n console.log('Three');\n}\n\nfunction logThreeAndFour() {\n logThree(); // step 3\n console.log('Four'); // step 4\n}\n\nconsole.log('One'); // step 1\nconsole.log('Two'); // step 2\nlogThreeAndFour(); // steps 3-4\n
\nHere's how the call stack processes the above script:
\\nStep 1: console.log(\'One\')
is pushed onto the stack:
[main(), console.log(\'One\')]
\'One\'
, pops off the stack

Step 2: console.log('Two')
is pushed:
[main(), console.log(\'Two\')]
\'Two\'
, pops off

Step 3: logThreeAndFour()
is invoked:
[main(), logThreeAndFour()]
Inside logThreeAndFour(), logThree()
is called\\n[main(), logThreeAndFour(), logThree()]
logThree()
calls console.log(\'Three\')
\\n[main(), logThreeAndFour(), logThree(), console.log(\'Three\')]
\'Three\'
, pops off
[main(), logThreeAndFour(), logThree()]
→ logThree()
pops off

Step 4: console.log('Four')
is pushed
[main(), logThreeAndFour(), console.log(\'Four\')]
\'Four\'
, pops off
[main(), logThreeAndFour()]
→ logThreeAndFour()
pops off

Finally, the stack is empty, and the program exits.
\\nSince JavaScript has only one call stack, blocking operations (e.g., CPU-heavy loops) freeze the entire application:
\\nfunction longRunningTask() {\\n // Simulate a 3-second delay\\n const start = Date.now();\\n while (Date.now() - start < 3000) {} // Blocks the stack\\n console.log(\'Task done!\');\\n}\\n\\nlongRunningTask(); // Freezes the UI for 3 seconds\\nconsole.log(\'This waits...\'); // Executes after the loop\\n\\n
This limitation is why JavaScript relies on asynchronous operations (e.g., setTimeout
, fetch
) handled by browser APIs outside the call stack.
While the call stack manages synchronous execution, JavaScript’s true power lies in its ability to handle asynchronous operations without blocking the main thread. This is made possible by Web APIs and the task queue, which work in tandem with the event loop to offload and schedule non-blocking tasks.
\\nWeb APIs are browser-provided interfaces that handle tasks outside JavaScript’s core runtime. They include:
\\nsetTimeout
, setInterval
fetch
, XMLHttpRequest
addEventListener
, click
, scroll
These APIs allow JavaScript to delegate time-consuming operations to the browser’s multi-threaded environment, freeing the call stack to process other tasks.
\\nLet’s break down a setTimeout
example:
console.log(\'Start\');\\n\\nsetTimeout(() => { \\n console.log(\'Timeout callback\'); \\n}, 1000); \\n\\nconsole.log(\'End\'); \\n\\n
Here’s the execution flow of the above snippet:
\\nconsole.log(\'Start\')
executes and pops off
setTimeout()
registers the callback with the browser's timer API and pops off
console.log('End')
executes and pops off
() => { console.log(...) }
is added to the task queue
console.log('Timeout callback')
executes:

Start\nEnd\nTimeout callback\n
Note that timer delays are minimum guarantees, meaning a setTimeout(callback, 1000)
callback might execute after 1,000ms, but never before. If the call stack is busy (e.g., with a long-running loop), the callback waits in the task queue.
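A small experiment makes this visible: the 0ms callback still has to wait for the blocking loop to finish:
const start = Date.now();\n\nsetTimeout(() => {\n // Runs only once the stack is free, far later than 0ms\n console.log(`Fired after ${Date.now() - start}ms`);\n}, 0);\n\nwhile (Date.now() - start < 500) {} // keep the call stack busy for 500ms\n\n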
Let’s see another example using the Geolocation API:
\\nconsole.log(\'Requesting location...\');\\n\\nnavigator.geolocation.getCurrentPosition( \\n (position) => { console.log(position); }, // Success callback \\n (error) => { console.error(error); } // Error callback \\n);\\n\\nconsole.log(\'Waiting for user permission...\');\\n\\n
In the above snippet, getCurrentPosition
registers the callbacks with the browser’s geolocation API. Then the browser handles permission prompts and GPS data fetching. Once the user responds, the relevant callback joins the task queue. This allows the event loop to transfer it to the call stack when idle:
Requesting location... \\nWaiting for user permission... \\n{ coords: ... } // After user grants permission \\n\\n
Without Web APIs and the task queue, the call stack would freeze during network requests, timers, or user interactions.
\\nWhile the task queue handles callback-based APIs like [setTimeout]
, JavaScript’s modern asynchronous features (promises, async/await
) rely on the microtask queue. Understanding how the event loop prioritizes this queue is key to mastering JavaScript’s execution order.
The microtask queue is a dedicated queue for:
\\n.then()
, .catch()
, .finally()
handlers
queueMicrotask()
— Explicitly adds a function to the microtask queue
async/await
— A function call after await
is queued as a microtask

Unlike the task queue, the microtask queue has higher priority. The event loop processes all microtasks before moving to tasks.
\\n\\nThe event loop follows a strict sequence of workflow. It executes all tasks in the call stack, drains the microtask queue completely, renders UI updates (if any), and then processes one task from the task queue before repeating the entire process again, continuously. This ensures promise-based code runs as soon as possible, even if tasks are scheduled earlier.
\\nLet’s see an example of microtasks vs tasks queues:
\\nconsole.log(\'Start\');\\n\\n// Task (setTimeout)\\nsetTimeout(() => console.log(\'Timeout\'), 0);\\n\\n// Microtask (Promise)\\nPromise.resolve().then(() => console.log(\'Promise\'));\\n\\nconsole.log(\'End\');\\n\\n
This produces the following output:
\\nStart \\nEnd \\nPromise \\nTimeout \\n\\n
The execution breaks down in the following sequence:
\\nconsole.log(\'Start\')
executes
setTimeout
schedules its callback in the task queue
Promise.resolve().then()
schedules its callback in the microtask queue
console.log('End')
executes
Promise
logs
Timeout
logs

Before continuing, it's worth mentioning a caveat with nested microtasks: microtasks can schedule more microtasks, potentially starving the event loop, as below:
\\nfunction recursiveMicrotask() {\\n Promise.resolve().then(() => {\\n console.log(\'Microtask!\');\\n recursiveMicrotask(); // Infinite loop\\n });\\n}\\n\\nrecursiveMicrotask();\\n\\n
The above script will hang because the microtask queue never empties. The fix is to use setTimeout to defer the work to the task queue.
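Here's a sketch of that fix. Each round is scheduled as a task, so the event loop can render and handle other work between rounds:
function recursiveTask() {\n setTimeout(() => {\n console.log('Task!');\n recursiveTask(); // still infinite, but no longer starves the event loop\n }, 0);\n}\n\nrecursiveTask();\n\n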
async/await
and microtasks

The async/await
syntax is syntactic sugar for promises. Code after await
is wrapped in a microtask:
async function fetchData() {\\n console.log(\'Fetching...\');\\n const response = await fetch(\'/data\'); // Pauses here\\n console.log(\'Data received\'); // Queued as microtask\\n}\\n\\nfetchData();\\nconsole.log(\'Script continues\');\\n\\n
The output is as follows:
\\nFetching... \\nScript continues \\nData received \\n\\n
JavaScript’s single-threaded model ensures simplicity but struggles with CPU-heavy tasks like image processing, and complex or large dataset calculations. These tasks can freeze the UI, creating a poor user experience. Web Workers solve this by executing scripts in separate background threads, freeing the main thread to handle the DOM and user interactions.
\\nWorkers run in an isolated environment with their own memory space. They cannot access the DOM or window
object, ensuring thread safety. Communication between the main thread and workers happens via message passing, where data is copied (via structured cloning) or transferred (using Transferable
objects) to avoid shared memory conflicts.
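As a quick illustration of the difference, postMessage accepts an optional array of Transferable objects; ownership of the buffer then moves to the worker instead of being cloned (the variable names here are illustrative, and worker is created as in the next example):
const buffer = new ArrayBuffer(1024 * 1024); // e.g., 1MB of pixel data\n\n// Cloned (copied): worker.postMessage({ payload: buffer });\n// Transferred (zero-copy):\nworker.postMessage({ payload: buffer }, [buffer]);\n\nconsole.log(buffer.byteLength); // 0: the buffer is now detached on this thread\n\n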
The code block below shows a sample of delegating image processing, which in this scenario is assumed to be computationally expensive. The worker.postMessage method sends a message to the worker, while worker.onmessage and worker.onerror handle the success and failure of the background work.
Here’s the main thread:
\\n// Create a worker and send data\\nconst worker = new Worker(\'worker.js\');\\nworker.postMessage({ task: \'processImage\', imageData: rawPixels }); \\n\\n// Listen for results or errors\\nworker.onmessage = (event) => {\\n displayProcessedImage(event.data); // Handle result\\n};\\n\\nworker.onerror = (error) => {\\n console.error(\'Worker error:\', error); // Handle failures\\n};\\n\\n
In the below code snippet, we utilize the onmessage
handler to receive the notification to start processing the image. The rawPixels
passed down can be accessed on the event
object through the data
field as below.
And now we see it from the worker.js viewpoint:
\\n// Receive and process data\\nself.onmessage = (event) => {\\n const processedData = heavyComputation(event.data.imageData); \\n self.postMessage(processedData); // Return result\\n};\\n\\n
Workers operate in a separate global scope, hence the use of self
. Use Transferable
objects (e.g., ArrayBuffer
) for large data to avoid costly copying. Spawning too many workers can bloat memory, so reuse them for recurring tasks.
JavaScript’s asynchronous prowess lies in its elegant orchestration of the call stack, Web APIs, and the event loop—a system that enables non-blocking execution despite its single-threaded nature. By leveraging the task queue for callback-based operations and prioritizing the microtask queue for promises, JavaScript ensures efficient handling of asynchronous workflows.
\\n\\nBy mastering these concepts, you’ll write code that’s not just functional but predictable and performant, whether you’re handling user interactions, fetching data, or optimizing rendering.
\\nExperiment with DevTools, embrace asynchronous patterns, and let JavaScript’s event loop work for you—not against you.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nEditor’s note: This post was updated by Isaac Okoro on February 18, 2025, to reflect Framer Motions’ rebranding to Motion, and include information on the latest features and best practices available in Motion.
\\nMotion — the tool formerly known as Framer Motion — can help JavaScript developers quickly and effectively write animations. Motion makes it easy to add intuitive animations to your web applications with minimal code.
\\nAnimation in React, and on the web at large, is the process of changing the visual state of the UI elements on a page over time.
\\nWhat do I mean by visual state? Any property of the element that influences how it looks: height, shape, position relative to other elements, etc. The core idea of animation is that you’re changing some visible property of something on the page over time.
\nThere are a few ways to create animations in React, but all of them fall into two broad categories: CSS animations, which change visual state by applying CSS rules, and JavaScript animations, which use JavaScript to change the properties of the element.
\\nIn either of those categories, you can implement the animation from scratch or use a library. On the CSS side, you can compose your animations with CSS rules, or you can use third-party libraries like Animate.css.
\\nIf you choose to use JavaScript, you can either write custom code to create animations or use libraries like GSAP or Framer Motion.
\\nEach library has its advantages, and each has a different approach to writing animations. In this article, we’ll explore Motion (formerly known as Framer Motion), a React animation library created and maintained by the Framer design team.
\\nYou’ll learn the core components that underpin all Motion animations, dive into some of the features that make Motion a great tool, discover best practices for getting the most out of the library, and put it all into practice with a step-by-step example: building a task tracker.
\\nMotion is a fairly popular and actively maintained library, with over 27k stars on GitHub, and plenty of resources to support it.
\\nBut most importantly, Motion is built around allowing you to write complex, production-grade animations with as little code as possible. Using Motion is so convenient that you can implement drag-and-drop by adding a single line of code! Motion also greatly simplifies tasks like SVG animation and animating layout shifts.
\\nMotion has an intuitive approach to animation. It provides a set of components that wrap your markup and accept props to allow you to specify what type of animation you want. The core components of Motion are:
\\nmotion
component
AnimatePresence component
LazyMotion component
LayoutGroup component
MotionConfig component
Reorder component
AnimateNumber component (exclusive to premium users)
Cursor component (exclusive to premium users)
\\nmotion
componentThe motion
component provides the foundation of all animation. It wraps the HTML elements in your React components and animates those elements with state passed to its initial
and animate
props. Below is an example. Take a plain div you might find anywhere on the web:
<div>I have some content here</div>\\n\\n
Let’s assume you wanted this div
to fade into the page when it loads. This code is all you need:
<motion.div\\n initial={{ opacity:0 }}\\n animate={{ opacity:1 }}\\n>\\n I have some content in here \\n</motion.div>\\n\\n
When the page loads, the div
will animate smoothly from transparency to full opacity, gradually fading into the page. In general, when the motion component is mounted, the values specified in the initial
prop are applied to the component, and then the component is animated until it reaches the values specified in the animate
prop.
AnimatePresence
component

AnimatePresence
works with motion
and is necessary to allow elements you remove from the DOM to show exit animations before they’re removed from the page. AnimatePresence
only works on its direct children that fulfill one of two conditions:
motion
componentmotion
component as one of its childrenThe desired exit animation has to be specified by adding the exit
prop to motion
. Here’s an example of AnimatePresence
at work:
<AnimatePresence>\\n <motion.div\\n exit={{ x: \\"-100vh\\", opacity: 0 }}\\n >\\n Watch me go woosh!\\n </motion.div>\\n</AnimatePresence>\\n\\n
When the div wrapped by AnimatePresence
is removed from the DOM, it will slide 100vh to the left (instead of just disappearing), fading into transparency as it does so. Only after that will the div be removed from the page. Note that when multiple components are direct children of AnimatePresence
, they each need to have a key
prop with a unique value so AnimatePresence
can keep track of them in the DOM.
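For example, a list of exiting items might look like this (notifications is a hypothetical array, not from the examples above):
<AnimatePresence>\n {notifications.map((note) => (\n <motion.div key={note.id} exit={{ opacity: 0 }}>\n {note.text}\n </motion.div>\n ))}\n</AnimatePresence>\n\n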
LazyMotion
componentMotion’s motion
component comes with all features bundled, resulting in a bundle size of approximately 34kb. However, with LazyMotion
and the m
component, we can reduce this size to six kb for the initial render and then load specific features either synchronously or asynchronously:
import { LazyMotion, domAnimation } from \"motion/react\"\nimport * as m from \"motion/react-m\"\n\nexport const MyComponent = ({ isVisible }) => (\n <LazyMotion features={domAnimation}>\n <m.div animate={{ opacity: 1 }} />\n </LazyMotion>\n)\n\n
The LazyMotion
component provides the features
prop, which is responsible for loading the animation feature bundle.
LayoutGroup
component

The LayoutGroup
component groups motion
components that need to be aware of each other’s state and layout changes, ensuring smooth animations across dynamic UI elements. This can be seen in the example below:
import { LayoutGroup } from \\"motion/react\\"\\n\\nfunction Accordion() {\\n return (\\n <LayoutGroup>\\n <ToggleContent />\\n <ToggleContent />\\n </LayoutGroup> \\n )\\n}\\n\\n
MotionConfig
component

The MotionConfig
component allows setting default animation configurations for all child motion
components, thus improving consistency across animations and simplifying configuration management:
import { motion, MotionConfig } from \\"motion/react\\"\\n\\nexport const CustomComponent = () => (\\n <MotionConfig transition={{ duration: 2 }}>\\n <motion.div initial={{ opacity: 0 }} animate={{ opacity: 1 }} />\\n </MotionConfig>\\n)\\n\\n
The MotionConfig
component houses three props:
transition
— Define a default transition for all child componentsreducedMotion
— Control reduced motion settings (\\"user\\"
, \\"always\\"
, \\"never\\"
)nonce
— Apply a CSP nonce for security complianceReorder
component

The Reorder
components create smooth drag-to-reorder lists, such as sortable tabs, tasks, and to-dos:
import { Reorder } from \"motion/react\"\nimport { useState } from \"react\"\n\nconst [items, setItems] = useState([0, 1, 2, 3])\n\nreturn (\n <Reorder.Group axis=\"y\" values={items} onReorder={setItems}>\n {items.map((item) => (\n <Reorder.Item key={item} value={item}>\n {item}\n </Reorder.Item>\n ))}\n </Reorder.Group>\n)\n\n
The Reorder
component has two parts: Reorder.Group, which serves as a wrapper component for each Reorder.Item
, which takes in a key
for unique identification and a value
.
Let’s apply everything we’ve learned to a more complex example. At the end of this article, you’ll have built an animated task tracker that looks like this:
\\nStart by navigating to the directory where you want the example to live. Next, open your terminal and create a starter React app using Vite with this command:
\\nnpm create vite@latest\\n\\n
Then, answer the prompts like this:
\\nNext, we’ll add Tailwind CSS and Lucide icon library to our project. To do this, first change the directory into the project directory and add the Tailwind CSS dependencies:
\\ncd task-tracker && yarn add lucide-react tailwindcss @tailwindcss/vite \\n\\n
After running the command above, open the project in your preferred code editor. Your project architecture should now look like this:
\\nDelete the src/assets
folder and App.css
. Now, write the code for the task tracker without any animation. Start with the project’s CSS by replacing the contents of index.css
with the Tailwind CSS import as shown below:
@import \\"tailwindcss\\";\\n\\n
Now go to the vite.config.js
file and update it with the code below:
import tailwindcss from \\"@tailwindcss/vite\\";\\nimport react from \\"@vitejs/plugin-react\\";\\nimport { defineConfig } from \\"vite\\";\\n\\nexport default defineConfig({\\n plugins: [react(), tailwindcss()],\\n});\\n\\n
Create a folder called components
in the src
folder and add the following files to it: AddTask.jsx
, Task.jsx
, and TaskList.jsx
. Your folder should look exactly like the one below:
Next comes the code for the newly created file. In the Task.jsx
file, add the code below:
import { GripVertical, Trash2 } from \'lucide-react\';\\n\\nconst Task = ({ task, onDelete }) => {\\n if (!task) return null;\\n const taskText = task.text || \'Untitled Task\';\\n\\n return (\\n <div className=\\"flex items-center gap-4 bg-white rounded-lg p-4 shadow-sm border border-gray-100 w-full\\"> \\n <button\\n className=\\"p-1 rounded hover:bg-gray-100 text-gray-400 hover:text-gray-600 cursor-grab active:cursor-grabbing\\"\\n >\\n <GripVertical size={20} />\\n </button>\\n\\n <span className=\\"flex-1 text-gray-700\\">{taskText}</span>\\n\\n <button\\n onClick={() => onDelete(task.id)}\\n className=\\"p-1 rounded hover:bg-red-50 text-red-400 hover:text-red-600 transition-colors\\"\\n disabled={!task.id}\\n >\\n <Trash2 size={20} />\\n </button>\\n </div>\\n );\\n};\\n\\nexport default Task;\\n\\n
For brevity, we won’t go over the starter code in detail. Essentially it does a few things:
\\nNext, update the AddTask.jsx
file with the code below:
import { Plus } from \'lucide-react\';\\n\\nconst AddTask = ({ newTask, setNewTask, onSubmit }) => {\\n return (\\n <form onSubmit={onSubmit} className=\\"w-full mb-6\\">\\n <div className=\\"flex gap-2\\">\\n <input\\n type=\\"text\\"\\n value={newTask}\\n onChange={(e) => setNewTask(e.target.value)}\\n placeholder=\\"Add a new task...\\"\\n className=\\"flex-1 p-3 rounded-lg border border-gray-200 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent\\"\\n />\\n <button\\n type=\\"submit\\"\\n className=\\"bg-blue-500 text-white p-3 rounded-lg hover:bg-blue-600 transition-colors\\"\\n >\\n <Plus size={24} />\\n </button>\\n </div>\\n </form>\\n );\\n};\\n\\nexport default AddTask;\\n\\n
The code above provides a simple and visually appealing form for adding new tasks. It also handles user inputs, updates the newTask
state, and triggers the onSubmit
function.
Finally, update the TaskList.jsx
file with the code block below:
import Task from \'./Task\';\\n\\nconst TaskList = ({ tasks = [], onDelete }) => {\\n const taskArray = Array.isArray(tasks) ? tasks : [];\\n\\n return (\\n <div className=\\"flex flex-col gap-3 w-full\\">\\n {taskArray.map((task) => (\\n <Task\\n key={task.id || Date.now()}\\n task={task}\\n onDelete={onDelete}\\n />\\n ))}\\n </div>\\n );\\n};\\n\\nexport default TaskList;\\n\\n
The code block above maps through the tasks array and dynamically renders each one in a list of individual tasks. It also guards against invalid input and supports deleting tasks through the onDelete
prop.
Now that we have all our components together, let’s update our App.jsx
file and try it out:
import { useState } from \'react\';\\nimport AddTask from \'./components/AddTask\';\\nimport TaskList from \'./components/TaskList\';\\n\\nconst App = () => {\\n const [tasks, setTasks] = useState([\\n { id: Date.now(), text: \\"Learn React\\" },\\n { id: Date.now() + 1, text: \\"Build a Task Tracker\\" },\\n { id: Date.now() + 2, text: \\"Add Tasks\\" }\\n ]);\\n\\n const [newTask, setNewTask] = useState(\\"\\");\\n\\n const handleAddTask = (e) => {\\n e.preventDefault();\\n if (!newTask.trim()) return;\\n\\n setTasks([...tasks, { id: Date.now(), text: newTask.trim() }]);\\n setNewTask(\\"\\");\\n };\\n\\n const handleDelete = (id) => {\\n setTasks(tasks.filter(task => task.id !== id));\\n };\\n return (\\n <div className=\\"min-h-screen bg-gray-50 p-4 sm:p-6 md:p-8\\">\\n <div className=\\"max-w-2xl mx-auto bg-white rounded-xl shadow-lg p-6 sm:p-8\\">\\n <h1 className=\\"text-2xl font-bold text-gray-900 mb-6\\">\\n Task Tracker\\n </h1>\\n <AddTask\\n newTask={newTask}\\n setNewTask={setNewTask}\\n onSubmit={handleAddTask}\\n />\\n {tasks.length === 0 ? (\\n <p className=\\"text-center text-gray-500 py-8\\">\\n No tasks yet. Add one above!\\n </p>\\n ) : (\\n <TaskList\\n tasks={tasks}\\n setTasks={setTasks}\\n onDelete={handleDelete}\\n />\\n )}\\n </div>\\n </div>\\n );\\n};\\nexport default App;\\n\\n
The code above manages the overall state and rendering of the imported components. It also provides helper functions for adding and deleting tasks.
\\nRun the code using the command below to see a preview of what we’ve built:
\\nyarn dev\\n\\n
From our preview above, we can observe that we have no animation involved in the user interface.
\\nWe’ll start by adding the Motion library to our project. Run the command below to add the Motion animation library:
\\nyarn add motion\\n\\n
Next, let’s update the Task.jsx
file by importing some components from Motion and using them:
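Judging by how the component below uses them, the imports would look like this:
import { Reorder, useDragControls } from \"motion/react\";\nimport { GripVertical, Trash2 } from \"lucide-react\";\n\n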
Then, update the div
with the Reorder.Item
component:
const Task = ({ task, onDelete }) => {\\n const controls = useDragControls();\\n\\n if (!task) return null;\\n const taskText = task.text || \'Untitled Task\';\\n\\n return (\\n <Reorder.Item\\n value={task}\\n dragListener={false}\\n dragControls={controls}\\n className=\\"flex items-center gap-4 bg-white rounded-lg p-4 shadow-sm border border-gray-100 w-full\\"\\n initial={{ opacity: 0, y: -20 }}\\n animate={{ opacity: 1, y: 0 }}\\n exit={{ opacity: 0, y: 20 }}\\n whileHover={{ scale: 1.02 }}\\n layout\\n >\\n <button\\n className=\\"p-1 rounded hover:bg-gray-100 text-gray-400 hover:text-gray-600 cursor-grab active:cursor-grabbing\\"\\n onPointerDown={(e) => controls.start(e)}\\n >\\n <GripVertical size={20} />\\n </button>\\n\\n ...\\n </Reorder.Item>\\n );\\n};\\n\\n
Let’s go through the changes we made to the Task
component below:
We wrapped the component in <Reorder.Item>
, which is a Motion component for handling drag-to-reorder functionality. The dragListener
prop disables default drag behavior while the dragControls
prop links the custom drag controller.
We also specify various props for the <Reorder.Item>
component:
initial
prop — The starting state for when the component mounts
animate
prop — Defines the final state after the component mounts
exit
prop — For defining how the component animates while being unmounted
whileHover
prop — Scales the component when hovered on

Next, apply the following changes to the TaskList.jsx
file. First, import the AnimatePresence
, Reorder
from the Motion library:
import { AnimatePresence, Reorder } from \'motion/react\';\\n\\n
Then, update the div
with the Reorder.Group
component:
return (\\n <Reorder.Group\\n axis=\\"y\\"\\n values={taskArray}\\n onReorder={setTasks}\\n className=\\"flex flex-col gap-3 w-full\\"\\n layoutScroll\\n >\\n <AnimatePresence mode=\\"popLayout\\">\\n {taskArray.map((task) => (\\n <Task\\n key={task.id || Date.now()}\\n task={task}\\n onDelete={onDelete}\\n />\\n ))}\\n </AnimatePresence>\\n </Reorder.Group>\\n);\\n\\n
In the code block above, the Reorder.Group
component serves as a container for all draggable tasks and also permits vertical movement using the axis=\\"y\\"
prop.
When a task is reordered, the onReorder
prop receives the new array order and updates the state via setTasks
.
The layoutScroll
prop ensures sleek animations even when the list is inside a scrollable container.
When set to mode=\\"popLayout\\"
, the AnimatePresence
component coordinates the exit animations of removed tasks with the layout adjustments of the remaining tasks.
Next, apply the following changes to the AddTask.jsx
file. First, import the lazy m
component from the Motion library:
import * as m from \\"motion/react-m\\";\\n\\n
Then update the <button>
component:
<m.button\n whileHover={{\n rotateZ: [0, -20, 20, -20, 20, -20, 20, 0],\n transition: { duration: 0.5 },\n }}\n type=\"submit\"\n className=\"bg-blue-500 text-white p-3 rounded-lg hover:bg-blue-600 transition-colors\"\n>\n <Plus size={24} />\n</m.button>\n\n
The code above uses Motion’s m
component, which works with the LazyMotion
. This component is bundle-friendly because it reduces the bundle size to less than six kb for initial rendering and syncing of subset features.
When the button is hovered, Motion uses the whileHover prop to perform the series of rotations specified in the rotateZ array, creating a shaking or wiggling effect. We also ensured the entire animation happens in only half a second (500 milliseconds) by specifying the transition property with duration: 0.5.
Finally, import the following components from the Motion library into your App.jsx
:
import { LayoutGroup, LazyMotion, MotionConfig, domAnimation } from \'motion/react\';\\nimport * as m from \\"motion/react-m\\"\\n\\n
Then update the rest of the code to look like this.
\\nreturn (\\n <LazyMotion features={domAnimation}>\\n <MotionConfig\\n transition={{ duration: 0.2 }}\\n reducedMotion=\\"user\\"\\n >\\n <div className=\\"min-h-screen bg-gray-50 p-4 sm:p-6 md:p-8\\">\\n <div className=\\"max-w-2xl mx-auto bg-white rounded-xl shadow-lg p-6 sm:p-8\\">\\n <LayoutGroup>\\n <m.h1\\n className=\\"text-2xl font-bold text-gray-900 mb-6\\"\\n initial={{ opacity: 0 }}\\n animate={{ opacity: 1 }}\\n transition={{ duration: 1 }}\\n >\\n Task Tracker\\n </m.h1>\\n <AddTask\\n newTask={newTask}\\n setNewTask={setNewTask}\\n onSubmit={handleAddTask}\\n />\\n {tasks.length === 0 ? (\\n <m.p\\n className=\\"text-center text-gray-500 py-8\\"\\n initial={{ opacity: 0 }}\\n animate={{ opacity: 1 }}\\n >\\n No tasks yet. Add one above!\\n </m.p>\\n ) : (\\n <TaskList\\n tasks={tasks}\\n setTasks={setTasks}\\n onDelete={handleDelete}\\n />\\n )}\\n </LayoutGroup>\\n </div>\\n </div>\\n </MotionConfig>\\n </LazyMotion>\\n );\\n\\n
In the code block above, we’re utilizing Motion’s LazyMotion
component with domAnimation
features to optimize loading and MotionConfig
with duration: 0.2
and reducedMotion: \\"user\\"
to establish global animation settings.
Within a LayoutGroup
that coordinates animations, two Motion components (m.h1
and m.p
) demonstrate fade-in animations, both starting with initial={{ opacity: 0 }}
and animating to animate={{ opacity: 1 }}
.
The conditional rendering switches between showing a TaskList
component or an animated empty state message, both benefiting from the same fade-in animation properties:
Motion also makes animating SVGs a breeze by allowing you to animate the pathLength
, pathSpacing
, and pathOffset
properties of those SVGs. Here’s an example that uses the same bell icon we used in our header bar.
Let's animate the pathLength
of an SVG. In the App.jsx
file, wrap the h1
with a span
tag and add the svg
code as shown below:
<span className=\'flex\'>\\n <m.svg\\n initial={{ pathLength: 0 }}\\n animate={{ pathLength: 1 }}\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n fill=\\"none\\"\\n viewBox=\\"0 0 24 24\\"\\n strokeWidth=\\"1.5\\"\\n stroke=\\"currentColor\\"\\n className=\\"notification__icon max-h-[30px] pr-1.5\\"\\n >\\n <m.path\\n initial={{ pathLength: 0 }}\\n animate={{ pathLength: 1 }}\\n transition={{ duration: 2 }}\\n height={10}\\n strokeLinecap=\\"round\\"\\n strokeLinejoin=\\"round\\"\\n d=\\"M14.857 17.082a23.848 23.848 0 005.454-1.31A8.967 8.967 0 0118 9.75v-.7V9A6 6 0 006 9v.75a8.967 8.967 0 01-2.312 6.022c1.733.64 3.56 1.085 5.455 1.31m5.714 0a24.255 24.255 0 01-5.714 0m5.714 0a3 3 0 11-5.714 0M3.124 7.5A8.969 8.969 0 015.292 3m13.416 0a8.969 8.969 0 012.168 4.5\\"\\n />\\n </m.svg>\\n\\n <m.h1\\n className=\\"text-2xl font-bold text-gray-900 mb-6\\"\\n initial={{ opacity: 0 }}\\n animate={{ opacity: 1 }}\\n transition={{ duration: 1 }}\\n >\\n Task Tracker\\n </m.h1>\\n</span>\\n\\n
On page load, the SVG should animate like this:
\\nJust like everything else, Motion makes implementing drag-and-drop easy. To make an element draggable, first wrap it with a Motion component, and then add the drag
prop. That’s all you need. As an example, you go from this:
<div>Drag me around!</div>\\n\\n
To this:
\\n<motion.div drag>Drag me around</motion.div>\\n\\n
And that’s it! Doing this will let you drag the div
anywhere, including off the screen. That's why Motion provides extra props like dragConstraints
to help you limit the range a component can be dragged to, and dragElastic
to moderate how elastic the boundary is.
dragConstraints
accepts either an object with values for top, left, right, and bottom, or a ref to another DOM object. The value of dragElastic
ranges from zero (meaning the boundaries aren’t elastic at all) to one, where the boundaries are as elastic as possible. Here’s an example:
<motion.div\\n drag\\n dragConstraints={{\\n top: -50,\\n left: -50,\\n right: 50,\\n bottom: 50,\\n }}\\n dragElastic={0.3}\\n>\\n Drag me around\\n</motion.div>\\n\\n
Now, the div can only be dragged within a 50px region in any direction, and the region’s boundaries are slightly elastic.
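Because dragConstraints also accepts a ref, you can constrain the draggable to a parent element instead of fixed pixel values. A minimal sketch, assuming useRef is imported from React:
const containerRef = useRef(null);\n\nreturn (\n <div ref={containerRef}>\n <motion.div drag dragConstraints={containerRef}>\n Drag me within my parent\n </motion.div>\n </div>\n);\n\n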
\\nMotion is a great tool, but like any tool, it can be misused. Here are a few rules of thumb to optimize performance when using Motion:
\\nLazyMotion
, domAnimation
and lazy m
for your global settings.
layoutGroup
component and layoutId
prop for shared element animations
transform
and opacity
properties whenever possibleFollowing these practices will help you optimize performance and create smoother animations with Motion in React
\\nLet’s see how Motion compares to various alternative libraries like React Spring and Anime.js. We will compare using metrics like automatic layout animations, built-in gesture support, and accessibility features:
\\nFeatures | \\nMotion | \\nReact Spring | \\nAnime.js | \\n
---|---|---|---|
Automatic layout animation | \\nComes with built-in layout animations with a single layout prop that handles everything | \\nManual calculations needed when creating layout animations | \\nHas no built-in layout system. Better suited for handling keyframe animation | \\n
Built-in gesture support | \\nProvides native drag, hover, and tap support with seamless animations | \\nRequires additional libraries for gesture support | \\nHas no built-in gesture support | \\n
Accessibility features | \\nAllows for easy accessibility configurations via MotionConfig, which allows you to set a site-wide policy for respecting the user’s device settings | \\nRequires custom solutions for accessibility | \\nNo built-in accessibility features | \\n
Syntax | \\nComponent-based approach with a React-like API | \\nImperative approach with a Hook-based API | \\nUses imperative Javascript API | \\n
SVG animations | \\nComes with native SVG support and path-drawing animations | \\nBasic SVG support and manual path animation | \\nExcellent SVG support with advanced path animations and morphing | \\n
Motion is a popular, well-supported JavaScript-based animation library for React applications that simplifies the process of implementing complex animations. In this article, we used Motion to build an animated task tracker, animate an SVG, and implement drag-and-drop.
\\nI hope you enjoyed this article. Happy coding!
useContext() Hook

Editor's note: This article was last updated in February 2025 by Vijit Ail to add use cases and detailed examples that align with the latest React 19 updates, expand commentary on Redux vs Context, and remove outdated information related to the class
component.
You’ve likely encountered situations where passing data through many components becomes cumbersome. That’s where React Context comes into the picture.
\\nReact Context was introduced in React v.16.3. It enables us to pass data through our component trees, allowing our components to communicate and share data at various levels. This guide will explore everything you need to know about using Context effectively. Let’s dive right into it.
\\nPassing props through each intermediate component can be tedious and make your code harder to maintain. That’s why React Context was introduced.
\\nReact Context is a great feature that enables you to manage and share state across the React application without needing to pass props through every level of the component tree. It is quite handy when you have a deeply nested component structure, and you need to pass specific data from a top-level component down to a deeply nested child component.
\\ncreateContext()
function. This creates a special object that stores the state that you want to share
Wrap a <Context />
component at the top of the component tree that needs access to the shared state
Any component inside the <Context>
component can access the shared data using the useContext()
Hook or the <Context.Consumer />
componentuseContext()
HookThe useContext()
Hook in React is a useful function that enables components to access shared data easily without having to pass down props through the component tree. It can read and subscribe to a context
directly from any component.
Here’s a basic usage for useContext()
:
// Assume MyContext is created somewhere in your app\\n\\nconst MyComponent = () => {\\n const contextValue = useContext(MyContext);\\n // you can use contextValue anywhere in this component\\n}\\n\\n
By calling useContext(MyContext)
, you get the current value from the nearest <MyContext />
provider above your component in the tree. If no provider is found, the useContext()
Hook returns the default value defined when you created MyContext
.
Components using useContext()
automatically re-render whenever the context value changes, making sure that your UI is always up to date with the latest context value.
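To illustrate the default-value behavior (ThemeContext and Status are hypothetical names for this sketch):
const ThemeContext = createContext(\"light\");\n\nfunction Status() {\n // No provider above, so useContext falls back to the default: \"light\"\n const theme = useContext(ThemeContext);\n return <span>Current theme: {theme}</span>;\n}\n\n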
When working with React, there are plenty of scenarios where Context can make your life much easier.
\\nLet’s consider, that you are working on an app that supports both light and dark modes. Instead of passing the theme prop through every level of the component tree, you can wrap a Context
component at the top of the app, generally in the entry component. This way, any component can access the current theme state directly from the Context
and change its styling accordingly.
In certain cases, components need to know who the current user is. By storing the user information in Context
, any component can access it without the need for prop drilling. The user name can be displayed in the top navigation and in the profile section with the use of Context
.
Popular routing libraries like react-router
and wouter
use Context
under the hood to keep track of the current routing state. This enables the app to know which route is currently active, and render the route component accordingly.
As your app continues to grow, managing data flow across the application can get tedious. Context
helps by lifting the state to a parent component, making it accessible to any component that needs it. Often, developers pair Context
with a reducer to manage complex state logic, which simplifies the code and makes the app maintainable in the long run.
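As a minimal sketch of that pattern (CounterContext and counterReducer are illustrative names, not from this article):
import { createContext, useContext, useReducer } from \"react\";\n\nconst CounterContext = createContext(null);\n\nfunction counterReducer(state, action) {\n switch (action.type) {\n case \"increment\":\n return { count: state.count + 1 };\n default:\n return state;\n }\n}\n\nfunction CounterProvider({ children }) {\n const [state, dispatch] = useReducer(counterReducer, { count: 0 });\n return <CounterContext value={{ state, dispatch }}>{children}</CounterContext>;\n}\n\n// Any descendant can read state and dispatch actions without prop drilling\nfunction Counter() {\n const { state, dispatch } = useContext(CounterContext);\n return (\n <button onClick={() => dispatch({ type: \"increment\" })}>\n Count: {state.count}\n </button>\n );\n}\n\n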
Let’s explore some uses of React Context with, well, some real-world context:
\\nLet’s see a simple implementation of how we can manage light and dark themes using React Context
.
First, we need to create a context that will hold the value of the active theme and an updater function that will toggle it:
\nimport { createContext, useContext, useState } from \"react\";\n\nconst ThemeContext = createContext();\n\n
Next, we will create a <ThemeProvider />
component that wraps our entire app and provides the theme context to all child components:
const ThemeProvider = ({ children }) => {\\n const [theme, setTheme] = useState(\\"light\\");\\n\\n const toggleTheme = () => {\\n setTheme((prevTheme) => (prevTheme === \\"light\\" ? \\"dark\\" : \\"light\\"));\\n };\\n\\n return (\\n <ThemeContext value={{ theme, toggleTheme }}>\\n <div className={`app-theme-${theme}`}>{children}</div>\\n </ThemeContext>\\n );\\n};\\n\\n
In the above code, we are using the <ThemeContext />
component to make the theme
and toggleTheme()
available to any component that consumes the context.
Now, create a <ThemeSwitcher />
component that will provide a button for the users to toggle between the themes. We use the useContext()
Hook to access the theme
and toggleTheme()
provided by the <ThemeProvider />
component:
const ThemeSwitcher = () => {\\n const { theme, toggleTheme } = useContext(ThemeContext);\\n\\n return (\\n <button onClick={toggleTheme}>\\n Switch to {theme === \\"light\\" ? \\"dark\\" : \\"light\\"} mode\\n </button>\\n );\\n};\\n\\n
Let’s create a <Header />
component to display the app title and the <ThemeSwitcher />
component:
const Header = () => (\\n <header>\\n <h1>My App</h1>\\n <ThemeSwitcher />\\n </header>\\n);\\n\\n
And finally, we wrap the <Main />
component with the <ThemeProvider />
so that all child components have access to the theme
context:
const Main = () => {\\n const { theme } = useContext(ThemeContext);\\n\\n return (\\n <div className={theme}>\\n <Header />\\n <main>\\n <p>Hello World!</p>\\n </main>\\n </div>\\n );\\n};\\n\\nexport default function App() {\\n return (\\n <ThemeProvider>\\n <Main />\\n </ThemeProvider>\\n );\\n}\\n\\n
Another common usage of React Context is to display toast messages. Let’s explore how React Context helps in displaying toast messages from different components.
\\nSimilar to above, we first need to create a context that will manage our toast messages:
\\n// ToastContext.jsx\\nimport React, { createContext, useState, useContext } from \\"react\\";\\n\\nconst ToastContext = createContext();\\n\\nexport const ToastProvider = ({ children }) => {\\n  const [toasts, setToasts] = useState([]);\\n\\n  const addToast = (message) => {\\n    const id = Date.now();\\n    // Functional update, so rapid successive calls never overwrite each other\\n    setToasts((currentToasts) => [...currentToasts, { id, message }]);\\n\\n    setTimeout(() => {\\n      setToasts((currentToasts) =>\\n        currentToasts.filter((toast) => toast.id !== id)\\n      );\\n    }, 3000);\\n  };\\n\\n  return (\\n    <ToastContext value={{ addToast }}>\\n      {children}\\n      <div className=\\"toast-container\\">\\n        {toasts.map((toast) => (\\n          <div key={toast.id} className=\\"toast\\">\\n            {toast.message}\\n          </div>\\n        ))}\\n      </div>\\n    </ToastContext>\\n  );\\n};\\n\\nexport const useToast = () => useContext(ToastContext);\\n\\n
The <ToastProvider />
component manages the state of toasts. It renders all the toasts in a fixed container. It also provides an addToast()
function to display new toast messages and remove them automatically after three seconds using the setTimeout()
method.
In the following code snippet, there are multiple child components like <Navbar />
, <Profile />
, <Home />
that use the addToast()
function to trigger the toast messages:
// App.js\\nimport { ToastProvider, useToast } from \\"./ToastContext\\";\\nimport \\"./styles.css\\";\\n\\n// Navbar Component\\nconst Navbar = () => {\\n const { addToast } = useToast();\\n\\n const handleLogout = () => {\\n addToast(\\"You have been logged out.\\");\\n };\\n\\n return (\\n <nav>\\n <h1>Toast Example</h1>\\n <button onClick={handleLogout}>Logout</button>\\n </nav>\\n );\\n};\\n\\n// Home Component\\nconst Home = () => {\\n const { addToast } = useToast();\\n\\n const handleClick = () => {\\n addToast(\\"Welcome to the Home Page!\\");\\n };\\n\\n return (\\n <div>\\n <h2>Home</h2>\\n <button onClick={handleClick}>Show Home Toast</button>\\n </div>\\n );\\n};\\n\\n// Profile Component\\nconst Profile = () => {\\n const { addToast } = useToast();\\n\\n const handleUpdate = () => {\\n addToast(\\"Profile updated successfully!\\");\\n };\\n\\n return (\\n <div>\\n <h2>Profile</h2>\\n <button onClick={handleUpdate}>Update Profile</button>\\n </div>\\n );\\n};\\n\\n// Dashboard Component with Nested Components\\nconst Dashboard = () => {\\n return (\\n <div>\\n <h1>Dashboard</h1>\\n <Home />\\n <Profile />\\n </div>\\n );\\n};\\n\\nexport default function App() {\\n return (\\n <ToastProvider>\\n <Navbar />\\n <Dashboard />\\n </ToastProvider>\\n );\\n}\\n\\n
The useToast()
custom Hook promotes code reusability by providing a simple API to access toast functionality in any child component. Without Context, you would have to drill the addToast() function down through every level as a prop:
// props drilling\\n<Navbar addToast={addToast} />\\n<Dashboard addToast={addToast} />\\n<Home addToast={addToast} />\\n<Profile addToast={addToast} />\\n\\n
Instead of passing the addToast()
function as a prop, Context has enabled the child components to trigger toast messages directly. This makes the approach scalable as you add more components to your app.
This example demonstrates how React Context can be used to manage shared functionalities like toast messages.
\\nThe use() Hook
The use() Hook in React is a special API introduced to simplify the interaction between components and asynchronous data and context. It enables a more flexible approach than the traditional useContext() Hook, allowing us to conditionally read values from a context or handle promises directly within a component.
In this example, we will review how use()
Hook can be used to access user data from a Context value.
In the <UserProvider />
component, we have a mock user object with email
and mobile
properties. This component wraps its children with <UserContext />
, providing user data to any component inside it.
The useUser()
custom Hook is defined to access the UserContext
value using the use()
Hook. This custom Hook can be used inside an if condition or a loop since it uses the use()
Hook under the hood:
// UserContext.jsx\\nimport React, { createContext, use } from \\"react\\";\\n\\nexport const UserContext = createContext(null);\\n\\nexport const UserProvider = ({ children }) => {\\n const user = {\\n email: \\"[email protected]\\",\\n mobile: \\"123-456-7890\\",\\n };\\n\\n return <UserContext value={user}>{children}</UserContext>;\\n};\\n\\nexport const useUser = () => use(UserContext);\\n\\n
In the App.jsx, we have created a <ProfileDetails />
component to display the user data. Initially, both mobile and email are masked to indicate sensitive information. The showData
state variable is used to track whether to unmask the data. When the showData
flag is true
, the user data is updated by accessing the value from the useUser()
custom Hook:
// App.jsx\\nimport { useState } from \\"react\\";\\nimport \\"./styles.css\\";\\nimport { UserProvider, UserContext, useUser } from \\"./UserContext\\";\\n\\nconst ProfileDetails = () => {\\n let mobile = \\"****\\";\\n let email = \\"****\\";\\n const [showData, setShowData] = useState(false);\\n\\n const toggleData = () => {\\n setShowData((prev) => !prev);\\n };\\n\\n if (showData) {\\n const user = useUser();\\n mobile = user.mobile;\\n email = user.email;\\n }\\n\\n return (\\n <div>\\n <h2>Profile Details</h2>\\n <button onClick={toggleData}>\\n {showData ? \\"Hide Data\\" : \\"Show Data\\"}\\n </button>\\n <p>Mobile: {mobile}</p>\\n <p>Email: {email}</p>\\n </div>\\n );\\n};\\n\\nexport default function App() {\\n return (\\n <UserProvider>\\n <ProfileDetails />\\n </UserProvider>\\n );\\n}\\n\\n
In this example, we saw how the new use()
Hook can be used to access React Context and conditionally reveal or hide data based on user interaction.
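The use() Hook can also unwrap promises. As a quick sketch of that capability (the /api/message endpoint and component names here are hypothetical), a component can suspend on a promise created outside render and let <Suspense /> show the loading state:
import { Suspense, use } from \\"react\\";\\n\\n// Hypothetical data source; the promise is created once, outside render,\\n// so use() suspends on a stable reference\\nconst messagePromise = fetch(\\"/api/message\\").then((res) => res.json());\\n\\nconst Message = () => {\\n  const { text } = use(messagePromise); // suspends until the promise resolves\\n  return <p>{text}</p>;\\n};\\n\\nexport default function App() {\\n  return (\\n    <Suspense fallback={<p>Loading...</p>}>\\n      <Message />\\n    </Suspense>\\n  );\\n}\\n\\n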
In this example, we’ll see how the [useReducer](https://blog.logrocket.com/react-usereducer-hook-ultimate-guide/)
Hook can be used with React Context. We will build a simple shopping cart app that will allow users to add, remove, and adjust the quantity of the cart items.
Let’s start by defining the Context:
\\nconst CartContext = createContext();\\n\\n
The CartContext
will provide the state and updater functions for the cart items to the child components.
Now, let’s define a set of action types. These constants will help us identify what kind of update we want to do on the cart items:
\\nconst ADD_TO_CART = \\"ADD_TO_CART\\";\\nconst REMOVE_FROM_CART = \\"REMOVE_FROM_CART\\";\\nconst INCREMENT_QUANTITY = \\"INCREMENT_QUANTITY\\";\\nconst DECREMENT_QUANTITY = \\"DECREMENT_QUANTITY\\";\\n\\n
These constants will be used by the reducer function to handle the dispatched actions.
\\nNext, we will define the state of the cart. The cart starts as an empty array, and will be updated as products are added by the user:
\\nconst initialState = {\\n cart: [],\\n};\\n\\n
The reducer function is where we will manage how the cart state updates in response to the defined actions.
\\nADD_TO_CART
— Add a product or increase its quantity if it already existsREMOVE_FROM_CART
— Remove a product from the cartINCREMENT_QUANTITY
— Increase the quantity of a productDECREMENT_QUANTITY
— Decrease the quantity of a productHere’s our reducer function:
\\nfunction reducer(state, action) {\\n  switch (action.type) {\\n    case ADD_TO_CART: {\\n      const existingProductIndex = state.cart.findIndex(\\n        (item) => item.id === action.product.id\\n      );\\n\\n      if (existingProductIndex >= 0) {\\n        // Copy the matching item instead of mutating it in place\\n        const newCart = state.cart.map((item, index) =>\\n          index === existingProductIndex\\n            ? { ...item, quantity: item.quantity + 1 }\\n            : item\\n        );\\n        return { ...state, cart: newCart };\\n      }\\n\\n      return {\\n        ...state,\\n        cart: [...state.cart, { ...action.product, quantity: 1 }],\\n      };\\n    }\\n    case REMOVE_FROM_CART:\\n      return {\\n        ...state,\\n        cart: state.cart.filter((item) => item.id !== action.productId),\\n      };\\n    case INCREMENT_QUANTITY: {\\n      const newCart = state.cart.map((item) =>\\n        item.id === action.productId\\n          ? { ...item, quantity: item.quantity + 1 }\\n          : item\\n      );\\n      return { ...state, cart: newCart };\\n    }\\n    case DECREMENT_QUANTITY: {\\n      const newCart = state.cart.map((item) =>\\n        item.id === action.productId && item.quantity > 1\\n          ? { ...item, quantity: item.quantity - 1 }\\n          : item\\n      );\\n      return { ...state, cart: newCart };\\n    }\\n    default:\\n      return state;\\n  }\\n}\\n\\n
Next, we will implement the main component that will use the useReducer
Hook to manage the cart’s state. The useReducer
Hook returns the current state and a dispatch function to trigger state updates:
import React, { useReducer, useContext } from \\"react\\";\\n\\nfunction MyApp() {\\n const [state, dispatch] = useReducer(reducer, initialState);\\n\\n const addToCart = (product) => {\\n dispatch({ type: ADD_TO_CART, product });\\n };\\n\\n const removeFromCart = (productId) => {\\n dispatch({ type: REMOVE_FROM_CART, productId });\\n };\\n\\n const incrementQuantity = (productId) => {\\n dispatch({ type: INCREMENT_QUANTITY, productId });\\n };\\n\\n const decrementQuantity = (productId) => {\\n dispatch({ type: DECREMENT_QUANTITY, productId });\\n };\\n\\n const cartValue = {\\n cart: state.cart,\\n addToCart,\\n removeFromCart,\\n incrementQuantity,\\n decrementQuantity,\\n };\\n\\n return (\\n <CartContext value={cartValue}>\\n <div className=\\"container\\">\\n <ProductList />\\n <Cart />\\n </div>\\n </CartContext>\\n );\\n}\\n\\n
In the above code snippet, functions like addToCart()
and removeFromCart()
use the dispatch()
function to trigger the actions, and the <CartContext />
component wraps the <ProductList />
and <Cart />
components so that they can use the context values to read and update the cart state.
Create the <ProductList />
and <Cart />
components as shown in the snippet below:
function ProductList() {\\n const products = [\\n { id: 1, name: \\"Product 1\\", price: 29.99 },\\n { id: 2, name: \\"Product 2\\", price: 49.99 },\\n { id: 3, name: \\"Product 3\\", price: 19.99 },\\n ];\\n\\n const { addToCart } = useContext(CartContext);\\n\\n return (\\n <div>\\n <h2>Product List</h2>\\n <ul>\\n {products.map((product) => (\\n <li key={product.id}>\\n {product.name} - ${product.price.toFixed(2)}\\n <button onClick={() => addToCart(product)}>Add to Cart</button>\\n </li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\n\\nfunction Cart() {\\n const { cart, removeFromCart, incrementQuantity, decrementQuantity } =\\n useContext(CartContext);\\n\\n return (\\n <div>\\n <h2>Shopping Cart</h2>\\n <ul>\\n {cart.map((item) => (\\n <li key={item.id}>\\n {item.name} - ${item.price.toFixed(2)} x {item.quantity}\\n <span className=\\"cart-buttons\\">\\n <button onClick={() => decrementQuantity(item.id)}>-</button>\\n <button onClick={() => incrementQuantity(item.id)}>+</button>\\n <button onClick={() => removeFromCart(item.id)}>Remove</button>\\n </span>\\n </li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\n\\n
The <ProductList />
component displays products and enables the user to add them to the cart. The <Cart />
component shows items in the cart and provides buttons to adjust quantities or remove the items.
This approach is ideal for managing complex state logic and sharing the state across multiple child components. It keeps the business logic clean and centralized, making the app easier to maintain.
\\nuseState()
and useReducer()
When deciding between
useState()
and useReducer()
, you should carefully review your app’s use case and state logic.
For example, useState()
works great when working with independent pieces of state, like toggling a switch or a dialog box, managing form inputs, etc.
On the other hand, using useReducer()
is preferable when you have complex state logic where the new state depends on the previous state’s value. It centralizes the state update logic into a single function, as we have seen in the shopping cart example. If your component’s state management starts to get complicated with useState()
, it’s a good sign to consider switching to useReducer()
for a more organized approach.
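To make the contrast concrete, here is a minimal sketch that puts the two side by side (the counterReducer and Example names are illustrative):
import { useState, useReducer } from \\"react\\";\\n\\nfunction counterReducer(state, action) {\\n  switch (action.type) {\\n    case \\"increment\\":\\n      return state + 1;\\n    case \\"decrement\\":\\n      return state - 1;\\n    default:\\n      return state;\\n  }\\n}\\n\\nfunction Example() {\\n  // Independent flag: useState is the simpler fit\\n  const [isOpen, setIsOpen] = useState(false);\\n\\n  // Transitions that depend on the previous state: useReducer centralizes them\\n  const [count, dispatch] = useReducer(counterReducer, 0);\\n\\n  return (\\n    <div>\\n      <button onClick={() => setIsOpen((prev) => !prev)}>\\n        {isOpen ? \\"Close\\" : \\"Open\\"}\\n      </button>\\n      <button onClick={() => dispatch({ type: \\"increment\\" })}>\\n        Count: {count}\\n      </button>\\n    </div>\\n  );\\n}\\n\\n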
You can check out the working demo here.
\\nWhen building React applications, especially larger ones, managing how components render becomes important, as it directly impacts the performance of the application. Using React Context is a great way to share data across components, but it can lead to unnecessary re-renders if not used carefully.
\\nLet’s explore how to optimize React Context using a simple task management app as an example:
\\nimport React, { useState, useContext, useEffect } from \\"react\\";\\n\\nconst TaskContext = React.createContext();\\n\\nfunction TaskProvider({ children }) {\\n const [tasks, setTasks] = useState([\\n { id: 1, text: \\"Design homepage\\", completed: false },\\n { id: 2, text: \\"Develop backend\\", completed: false },\\n ]);\\n\\n const addTask = (taskText) => {\\n setTasks((prevTasks) => [\\n ...prevTasks,\\n { id: Date.now(), text: taskText, completed: false },\\n ]);\\n };\\n\\n const toggleTaskCompletion = (taskId) => {\\n setTasks((prevTasks) =>\\n prevTasks.map((task) =>\\n task.id === taskId ? { ...task, completed: !task.completed } : task\\n )\\n );\\n };\\n\\n const contextValue = {\\n tasks,\\n addTask,\\n toggleTaskCompletion,\\n };\\n\\n return (\\n <TaskContext value={contextValue}>{children}</TaskContext>\\n );\\n}\\n\\nfunction TaskList() {\\n const { tasks, toggleTaskCompletion } = useContext(TaskContext);\\n\\n return (\\n <div>\\n <h2>Task List</h2>\\n <ul>\\n {tasks.map((task) => (\\n <li key={task.id}>\\n <span\\n style={{\\n textDecoration: task.completed ? \\"line-through\\" : \\"none\\",\\n }}\\n >\\n {task.text}\\n </span>\\n <button onClick={() => toggleTaskCompletion(task.id)}>\\n {task.completed ? \\"Undo\\" : \\"Complete\\"}\\n </button>\\n </li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\n\\nfunction AddTask() {\\n const { addTask } = useContext(TaskContext);\\n\\n useEffect(() => {\\n console.log(`<AddTask />`);\\n });\\n\\n const handleAddTask = () => {\\n const taskText = prompt(\\"Enter task description:\\");\\n if (taskText) {\\n addTask(taskText);\\n }\\n };\\n\\n return (\\n <div>\\n <h2>Add New Task</h2>\\n <button onClick={handleAddTask}>Add Task</button>\\n </div>\\n );\\n}\\n\\nfunction App() {\\n return (\\n <TaskProvider>\\n <TaskList />\\n <AddTask />\\n </TaskProvider>\\n );\\n}\\n\\nexport default App;\\n\\n
In the above code, we have a TaskContext
that contains the task items and the addTask()
function. While this is straightforward, it comes with a downside: each time an item gets added or completed, the context value changes, and every component consuming the TaskContext
re-renders. This includes the <AddTask />
component, which is only concerned with adding items, not displaying them.
This occurs because the context value is an object, which means it will be recreated on each render. Thus, React thinks that the context value has changed.
\\nSo how do we optimize this? We have to separate the context into two contexts: one for task items and another for task actions. By splitting states and actions into different contexts, we ensure that components only react to the data they need. For instance, <TaskList />
only cares about the task items, while <AddTask />
only needs to know how to add a new task:
import React, { useState, useCallback, useMemo } from \\"react\\";\\n\\nconst TaskContext = React.createContext();\\nconst TaskActionContext = React.createContext();\\n\\nfunction TaskProvider({ children }) {\\n  const [tasks, setTasks] = useState([\\n    { id: 1, text: \\"Design homepage\\", completed: false },\\n    { id: 2, text: \\"Develop backend\\", completed: false },\\n  ]);\\n\\n  const taskStateValue = {\\n    tasks,\\n  };\\n\\n  const addTask = useCallback((taskText) => {\\n    setTasks((prevTasks) => [\\n      ...prevTasks,\\n      { id: Date.now(), text: taskText, completed: false },\\n    ]);\\n  }, []);\\n\\n  const toggleTaskCompletion = useCallback((taskId) => {\\n    setTasks((prevTasks) =>\\n      prevTasks.map((task) =>\\n        task.id === taskId ? { ...task, completed: !task.completed } : task\\n      )\\n    );\\n  }, []);\\n\\n  const taskActionValue = useMemo(\\n    () => ({\\n      addTask,\\n      toggleTaskCompletion,\\n    }),\\n    [addTask, toggleTaskCompletion]\\n  );\\n\\n  return (\\n    <TaskContext value={taskStateValue}>\\n      <TaskActionContext value={taskActionValue}>{children}</TaskActionContext>\\n    </TaskContext>\\n  );\\n}\\n\\n
We also wrap the action functions with useCallback()
to make sure that they remain stable across renders. This is important for preventing unnecessary updates in components that consume these functions.
This not only improves performance but also makes the app more predictable and easier to maintain. When a task is toggled, only the <TaskList />
updates, not the <AddTask />
component, because we have clearly defined what each component cares about.
The optimized approach discussed in the previous example works well for most use cases, but it might not suffice as your application grows in complexity.
\\nIf you find yourself handling deeply nested components or managing a large global state, it might be time to consider a state management library like Redux. Redux provides a more structured way to manage state changes and can handle complex state updates more efficiently than context alone.
\\nDoes Redux replace React Context? The short answer is no, it doesn’t. Context and Redux are two different tools, and comparison often arises from misconceptions about what each tool is designed for. Although Context can be orchestrated to act as a state management tool, it wasn’t designed for that purpose, so you’d have to put in extra effort to make it work. There are already many state management tools that work well and will ease your troubles.
\\nChoosing between React Context and Redux should be based on the complexity and needs of your application’s data and business logic. React Context is effective for avoiding props drilling and simple state management. State management libraries like Redux, Zustand, etc. are better for use cases that involve complex states in large-scale or enterprise-level applications. They also provide access to advanced features like time-travel debugging, async middleware, action logging, etc.
\\nIn my experience with Redux, it can be relatively complex to achieve something that is easier to solve today with Context. Keep in mind that prop drilling and global state management are where Redux and Context’s paths cross. Redux has more functionality in this area. Ultimately, Redux and Context should be considered complementary tools that work together instead of as alternatives. My recommendation is to use Redux for complex global state management and Context for prop drilling.
\\nIn this article, we reviewed what React Context is, when we should use it to avoid prop drilling, its use cases with examples, and how we can use Context most effectively. We also cleared up some misconceptions surrounding React Context and Redux.
\\nThe main takeaways from this article include the following:

- React Context lets you share state across the component tree without prop drilling
- The useContext() Hook reads context values, while the newer use() Hook can also be called conditionally and can unwrap promises
- Pairing Context with useReducer centralizes complex state logic, as in the shopping cart example
- Splitting state and actions into separate contexts, along with useCallback() and useMemo(), prevents unnecessary re-renders
- Context complements, rather than replaces, state management libraries like Redux
\\nI hope you enjoyed this tutorial!
\\nThe ES2015 standard introduced arrow functions to JavaScript. Arrow functions have a simpler syntax than standard functions, but we’ll also see that there are some important differences in how they behave.
\\nArrow functions can be used almost anywhere a standard function expression can be used, with a few exceptions. They have a compact syntax, and like standard functions, have an argument list, a body, and a possible return value.
\\nWe’ll explore arrow functions in detail below, but in general they should be avoided any time you need a new this
binding. Arrow functions don’t have their own this
; they inherit the this
from the outer scope.
Arrow functions also can’t be used as constructors or generator functions, as they can’t contain a yield
statement.
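For example, calling an arrow function with new fails at runtime, while a regular call works fine:
const MakeUser = (name) => ({ name });\\n\\nconst user = MakeUser(\'Joe\'); // works as a regular call\\n// new MakeUser(\'Joe\');       // TypeError: MakeUser is not a constructor\\n\\n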
An arrow function consists of a list of arguments, followed by an arrow (an equals sign and a greater-than sign, =>
), followed by the function body. Here’s a simple example of an arrow function that takes a single argument:
const greet = name => {\\n console.log(`Hello, ${name}!`);\\n};\\n\\n
You can optionally also surround the argument with parentheses:
\\nconst greet = (name) => {\\n console.log(`Hello, ${name}!`);\\n}\\n\\n
If an arrow function takes more than one argument, the parentheses are required. Like a standard function, the argument names are separated by commas:
\\nconst sum = (a, b) => {\\n return a + b;\\n}\\n\\n
An anonymous arrow function has no name. These are typically passed as callback functions:
\\nbutton.addEventListener(\'click\', event => {\\n console.log(\'You clicked the button!\');\\n});\\n\\n
If your arrow function body is a single statement, you don’t even need the curly braces:
\\nconst greet = name => console.log(`Hello, ${name}!`);\\n\\n
One of the important differences between JavaScript arrow functions and standard functions is the idea of an implicit return: returning a value without using a return
statement.
If you omit the curly braces from an arrow function, the value of the function body’s expression will be returned from the function without needing a return
statement. Let’s revisit the sum
function from earlier. This can be rewritten to use an implicit return:
const sum = (a, b) => a + b;\\n\\n
Implicit return is handy when creating callback functions:
\\nconst values = [1, 2, 3];\\nconst doubledValues = values.map(value => value * 2); // [2, 4, 6]\\n\\n
You can return any kind of value you want with an implicit return, but you’ll need a little extra help if you want to return an object. Since an object literal uses curly braces, JavaScript will interpret the curly braces as the function body. Consider this example:
\\nconst createUser = (name, email) => { name, email };\\n\\n
In this case, there will be no implicit return and the function will actually return undefined
because there is no return
statement. To return an object implicitly, you need to wrap the object with parentheses:
const createUser = (name, email) => ({ name, email });\\n\\n
Now JavaScript knows this is an implicit return of an object containing the name
and email
properties.
Like with standard functions, an arrow function can explicitly return a value with a return
statement:
const createUser = (name, email) => {\\n return { name, email };\\n};\\n\\n
Arrow functions behave differently from standard functions in some other ways.
\\nthis binding
The most significant difference is that, unlike a standard function, an arrow function doesn’t create a this
binding of its own. Consider the following example:
const counter = {\\n value: 0,\\n increment: () => {\\n this.value += 1;\\n }\\n};\\n\\n
Because the increment
method is an arrow function, the this
value in the function does not refer to the counter
object. Instead, it inherits the outer this
, which in this example would be the global window object.
As you might expect, if you call counter.increment()
, it won’t change counter.value
. Instead, this.value
will be undefined
since this
refers to the window.
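The fix is to use a standard function, or the shorthand method syntax, which does get its own this binding:
const counter = {\\n  value: 0,\\n  increment() {\\n    this.value += 1; // this now refers to the counter object\\n  }\\n};\\n\\ncounter.increment();\\nconsole.log(counter.value); // 1\\n\\n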
Sometimes, you can use this to your advantage. There are cases where you do want the outer this
value from within a function. This is a common scenario when using callback functions. Before arrow functions, you’d have to call bind
on a function to force it to have a certain this
, or you might have followed a pattern like this:
var self = this;\\nsetTimeout(function() {\\n console.log(self.name);\\n}, 1000);\\n\\n
With an arrow function, you get the this
from the enclosing scope:
setTimeout(() => console.log(this.name), 1000);\\n\\n
arguments
objectIn a standard function, you can reference the arguments
object to get information about the arguments passed to the function call. This is an array-like object that holds all the argument values. In the past, you might have used this to write a variadic function.
Consider this sum
function, which supports a variable number of arguments:
function sum() {\\n let total = 0;\\n for (let i = 0; i < arguments.length; i++) {\\n total += arguments[i];\\n }\\n\\n return total;\\n}\\n\\n
You can call sum
with any number of arguments:
sum(1, 2, 3) // 6\\n\\n
If you implement sum
as an arrow function, there won’t be an arguments
object. Instead, you’ll need to use the rest parameter syntax:
const sum = (...args) => {\\n  let total = 0;\\n  for (let i = 0; i < args.length; i++) {\\n    total += args[i];\\n  }\\n\\n  return total;\\n}\\n\\n
You can call this version of the sum
function the same way:
sum(1, 2, 3) // 6\\n\\n
This syntax isn’t unique to arrow functions, of course. You can use the rest parameter syntax with standard functions, too. In my experience with modern JavaScript, I don’t really see the arguments
object being used anymore, so this distinction may be a moot point.
Standard JavaScript functions have a prototype
property. Before the introduction of the class
syntax, this was the way to create objects with new
:
function Greeter() { }\\nGreeter.prototype.sayHello = function(name) {\\n console.log(`Hello, ${name}!`);\\n};\\n\\nnew Greeter().sayHello(\'Joe\'); // Hello, Joe!\\n\\n
If you try this with an arrow function, you’ll get an error. This is because arrow functions don’t have a prototype:
\\nconst Greeter = () => {};\\nGreeter.prototype.sayHello = name => console.log(`Hello, ${name}!`);\\n// TypeError: Cannot set properties of undefined (setting \'sayHello\')\\n\\n
Arrow functions can be used in a lot of scenarios, but there are some situations where you still need to use a standard function expression. These include:
- Object methods that need their own this value
- Functions that need to be given a specific this value with Function.prototype.bind
- Constructor functions called with new
- Generator functions, which contain yield statements

Arrow functions particularly shine when used as callback functions, due to their terse syntax. In particular, they are very useful for array methods such as forEach
, map
, and filter
. You can use them as object methods, but only if the method doesn’t try to access the object using this
.
The arrow function is very useful in certain situations. But like most things, arrow functions have potential pitfalls if you don’t use them correctly.
\\n\\nArrow functions can also be used as class fields to define methods. Here’s how you’d define a method using an arrow function:
\\nclass Person {\\n constructor(name) {\\n this.name = name;\\n }\\n\\n greet = () => console.log(`Hello, ${this.name}!`);\\n}\\n\\n
Unlike a method on an object literal — which as we saw earlier does not get the this
value — here the greet
method gets its this
value from the enclosing Person
instance. Then, no matter how the method is called, the this
value will always be the instance of the class. Consider this example that uses a standard method with setTimeout
:
class Person {\\n constructor(name) {\\n this.name = name;\\n }\\n\\n greet() {\\n console.log(`Hello, ${this.name}!`);\\n }\\n\\n delayedGreet() {\\n setTimeout(this.greet, 1000);\\n }\\n}\\n\\nnew Person(\'Joe\').delayedGreet(); // Hello, undefined!\\n\\n
When the greet
method is called from the setTimeout
call, its this
value becomes the global window object. The name
property isn’t defined there, so you’ll get Hello, undefined!
when you call the delayedGreet
method.
If you define greet
as an arrow function instead, it will still have the enclosing this
set to the class instance, even when called from setTimeout
:
class Person {\\n constructor(name) {\\n this.name = name;\\n }\\n\\n greet = () => console.log(`Hello, ${this.name}!`);\\n\\n delayedGreet() {\\n setTimeout(this.greet, 1000);\\n }\\n}\\n\\nnew Person(\'Joe\').delayedGreet(); // Hello, Joe!\\n\\n
You can’t, however, define the constructor as an arrow function. If you try, you’ll get an error:
\\nclass Person {\\n constructor = name => {\\n this.name = name;\\n }\\n}\\n\\n// SyntaxError: Classes may not have a field named \'constructor\'\\n
Since the arrival of the ES2015 standard, JavaScript programmers have had arrow functions in their toolbox. Their main strength is the abbreviated syntax; you don’t need the function
keyword, and with implicit return you don’t need a return
statement.
The lack of a this
binding can cause confusion, but is also handy when you want to preserve the enclosing this
value to another function when passed as a callback.
Consider this chain of array operations:
\\nconst numbers = [1, 2, 3, 4]\\n .map(function(n) {\\n return n * 3;\\n })\\n .filter(function(n) {\\n return n % 2 === 0;\\n });\\n\\n
This looks fine, but it’s a little verbose. With arrow functions, the syntax is cleaner:
\\nconst numbers = [1, 2, 3, 4]\\n .map(n => n * 3)\\n .filter(n => n % 2 === 0);\\n\\n
Arrow functions don’t have an arguments
object, but they do support rest parameter syntax. This makes it easy to build arrow functions that take a variable number of arguments.
The main advantages of arrow functions are enhanced readability as well as the different this
behavior, which will make life easier in certain situations where you need to preserve an outer this
value.
\\nAs someone with a toddler, it’s surprising just how many things in our life are a “learned skill”.
\\nEven things we take for granted, like eating. You or I could suck down any variety of foods without a second thought, while parents stare nervously at their firstborn eating a banana, ready to whack their back at the first sign of difficulty.
\\nChecking for null
can be nerve-wracking for both new and seasoned JavaScript developers. It’s something that should be very simple, but still bites a surprising amount of people.
The basic reason for this is that in most languages, we only have to cater to null
. But in JavaScript, we have to cater to both null
and undefined
. How do we do that?
null check function in JavaScript
We can call this the “I don’t care about the story, I just want to know how to do it” section.
\\nThese days whenever you Google a recipe for toast, you get a 5,000-word essay before the writer tells you to put the bread in the oven. Let’s not be like that. Checking for null
in JavaScript can be achieved like so.
Let’s imagine our test data object like this:
\\nvar testObject = {\\n    empty: \'\',\\n    isNull: null,\\n    isUndefined: undefined,\\n    zero: 0\\n}\\n\\n
Our null
check function looks like this, where we just simply check for null
:
function isNull(value){\\n if (value == null){\\n return true\\n }\\n else{\\n return false\\n }\\n}\\n\\n
To test this function, let’s put some values into it and see how it goes:
\\nconsole.log(`\\n testObject.empty: ${isNull(testObject.empty)}\\n testObject.isNull: ${isNull(testObject.isNull)}\\n testObject.isUndefined: ${isNull(testObject.isUndefined)}\\n zero: ${isNull(testObject.zero)} \\n `)\\n\\n
The output is what we would expect, and similar to what other languages would provide:
\\nC:\\\\Program Files\\\\nodejs\\\\node.exe .\\\\index.js\\n\\n testObject.empty: false\\n testObject.isNull: true\\n testObject.isUndefined: true\\n zero: false \\n\\n
If objects are null
or undefined
, then this function would return true.
Checking for null, undefined, or an empty string
What if we want to check if a variable is null
, or if it’s simply empty? We can check for this by doing the following:
function isNullOrEmpty(value){\\n return !value;\\n}\\n\\n
This depends on the object’s “truthiness”. “Truthy” values like “words” or numbers greater than zero would return true, whereas empty strings would return false.
\\nWe have to be a little careful about this application. Let’s consider form entry. For instance, if there was an empty string in the form, then it would be acceptable to say that the field isn’t filled out.
\\nHowever, if the user gave a single ’0’ in the form, then 0 would also evaluate to false
. In the case of form validation, this wouldn’t work the way we would expect. Also note that empty arrays, unlike empty strings, are truthy in JavaScript, so !value won’t catch an array that exists but has no values in it; if an empty array should count as “empty”, you need to check its length explicitly. Otherwise, a present-but-empty array will pass a !value check, which is probably not what you want.
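A few quick checks in a console make these rules concrete:
console.log(!\'\');             // true: an empty string is falsy\\nconsole.log(!0);              // true: zero is falsy too\\nconsole.log(![]);             // false: an empty array is truthy\\nconsole.log([].length === 0); // true: check length to detect an empty array\\n\\n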
Ah boy this is getting complex. Why is it though? Let’s dig in a bit.
\\n\\nnull and undefined in JavaScript
There are probably hundreds, if not thousands, of posts and StackOverflow entries on this topic. It’s simple – the behavior of null
and undefined
is a bit wily to developers, both new and old. If we get it wrong, websites break, or our node apps stop working. So we really want to dial it in and make sure it works the way we expect.
Add into the mix that JavaScript has been around since 1995. This also presents problems. JavaScript is used on almost every webpage today, so core features simply cannot be rewritten or reimplemented. If, overnight, a change was made to how null
or undefined
was handled in browsers and frameworks like Node.js, the carnage would be huge. It would dwarf the CrowdStrike outage, for instance.
The reason for this is that most languages only use null
, and undefined
is something used only in JavaScript. While null
is appropriate to represent the absence of a value, other languages would typically throw an exception rather than return undefined
.
For example, in C#, if we wrote the following:
\\nstring testString;\\nConsole.WriteLine(testString);\\n\\n
Our code would fail to compile with “Use of unassigned local variable testString
”. The compiler is performing some analysis and telling us that we can’t use a variable that hasn’t been assigned. In other words, it’s undefined.
In C# (and a lot of other languages), we never run the risk of using things that are undefined, because something throws an error before we’re in that situation. Take an operation like accessing an entry in an array that is out of bounds: C# would throw, whereas JavaScript would simply return undefined
.
Consider:
\\nlet array = [];\\nconsole.log(array[5]);\\n\\n
We’ve got an empty array, and then we try to print out the fifth element from the array. This is out of bounds. The result of this code is undefined
.
Values, null, and undefined
Basically, there are three conditions that we want to account for when checking for values that we think could be null
or undefined
. To help visualize this, let’s imagine that we have a blue box that is our variable, and the things that we place in this box represent the things we assign to the variable:
There are three states that our box can be in:
\\nThe value is assigned. Regardless of what that value is, we know that a value has been assigned to an object because it is not null
or undefined
. Because of this, there is an object present in the box/the variable.
The value is null
. The box is still there, but it has nothing in it.
The value is undefined
. The box does not exist.
Most of the time, checking that an item is null
will be enough. But because we have both undefined
and null
to cater to, and both can mean different things, whenever we perform a check for null
, we need to think about exactly what kind of check we are trying to perform and act accordingly.
Because null
is a “falsey” value, it can be tempting to write code like if (!value)
to do something if a variable is null
. But, as we’ve seen, that check also fires for empty strings and the number 0, while an empty array, which is truthy, never triggers it.
Understanding these key differences can help us to write high-quality code that doesn’t behave in unexpected ways. And that’s what we should always aim to do, even if it takes a bit longer.
\\nNode.js has long been the go-to tool for developing web applications with intensive I/O-bound operations, thanks to its event loop interface and asynchronous nature, which delegate I/O operations without blocking the main thread.
\\nWhile this design simplifies development and avoids common threading issues like race conditions and deadlocks, its single-threaded nature presents significant limitations — especially in an era where multi-core processors and computationally intensive applications are standard.
\\nIn this article, we’ll discuss the benefits of parallel computing and how to leverage its capabilities in your Node.js applications.
\\nNode.js runs on V8, Google’s JavaScript engine, which executes code in a single thread. By default, all computations happen sequentially in one process, using only a single CPU core.
\\nThis model works well for I/O operations but is not as effective for CPU-intensive tasks. Consider this example:
\\nfunction calculatePrimes(max) {\\n const primes = [];\\n for (let i = 2; i < max; i++) {\\n let isPrime = true;\\n for (let j = 2; j < i; j++) {\\n if (i % j === 0) {\\n isPrime = false;\\n break;\\n }\\n }\\n if (isPrime) primes.push(i);\\n }\\n return primes;\\n}\\n\\ncalculatePrimes(1000000);\\n\\n
This code will block the event loop when it runs. This is because the event loop in JavaScript relies on asynchronous operations to avoid blocking the main thread.
\\nIt does this by offloading I/O-related tasks (e.g., network requests, file system operations) to the operating system. At the same time, the event loop continues to process other tasks as it waits for the I/O operations to complete.
\\nHowever, the calculatePrimes
function in this example is synchronous and executes line-by-line, which means it’ll occupy the thread until completion, thus preventing the event loop from processing other events and blocking the main thread, making the application unresponsive during this time.
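You can see the blocking for yourself by racing the function against a timer; the timer callback only runs once the synchronous loop has finished:
setTimeout(() => console.log(\'timer fired\'), 100);\\n\\nconsole.time(\'primes\');\\ncalculatePrimes(1000000); // monopolizes the main thread\\nconsole.timeEnd(\'primes\');\\n// \'timer fired\' is logged only after calculatePrimes() returns\\n\\n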
To learn more about the event loop, check out our comprehensive article on the topic.
\\nModern hardware typically comes with multiple CPU cores. Leveraging these cores is crucial for applications to be performant and fast. However, Node.js’ single-threaded design limits it to just one of these cores, which leaves significant processing power untapped.
\\nWhile this single thread is enough for Node.js to execute I/O operations efficiently, it struggles with tasks like:

- Heavy mathematical computations, like the prime number example above
- Image, video, and audio processing
- Compression and encryption
- Parsing and transforming large datasets
\\nParallel computing allows developers to use the hardware’s multi-core to create and manage multiple threads in parallel within the same Node.js process.
\\nThere are several ways to achieve parallel computing, but “worker threads” are considered the herald of true parallel computing in Node.js due to their fine-grained and simplified approach to managing threads without the complexities of low-level thread management.
\\n\\nWorker threads were introduced in Node.js v10.5 with the sole purpose of offloading CPU-intensive operations from the event loop. They allow developers to create isolated execution environments that can run simultaneously across different CPU cores.
\\nEssentially, each worker thread is a separate V8 JavaScript runtime that has its own memory space, event loop, and execution context and is completely independent of the main thread.
\\nThis way, the worker threads can execute CPU-intensive tasks in their environment and only communicate to the parent thread using a messaging channel without affecting the parent’s usual function or blocking it.
\\nNode.js provides a built-in worker_threads
library, which offers high-level methods and functions for effectively implementing and managing worker threads.
The Worker constructor is used in the main thread to create and manage new workers. It accepts a path to a worker script and an optional options object for passing data to the worker:
\\n// index.js : Main Thread\\n\\nconst { Worker } = require(\'worker_threads\');\\n\\nconst worker = new Worker(\'./worker.js\', {\\n workerData: { task: \'exampleTask\' } // Pass data to the worker\\n});\\n\\n
The returned worker instance also lets the main thread listen for messages from the worker using the message
event, and handle errors and termination with the error
and exit
events:
worker.on(\'message\', (result) => {\\n console.log(\'Worker result:\', result);\\n});\\n\\nworker.on(\'error\', (err) => {\\n console.error(\'Worker error:\', err);\\n});\\n\\nworker.on(\'exit\', (code) => {\\n console.log(`Worker exited with code ${code}`);\\n});\\n\\n
workerData
The workerData
option is typically used to pass task-specific configuration or initial parameters from the main thread to the worker thread when the worker is created. This data is available only when the worker is initialized:
// index.js : Main Thread\\n\\nconst { Worker } = require(\'worker_threads\');\\n\\nconst worker = new Worker(\'./worker.js\', {\\n workerData: { name: \'Node.js\', version: 18 }\\n});\\n\\n
The data can be accessed in the worker thread via require(\'worker_threads\').workerData
in the worker script:
// worker.js\\n\\nconst { workerData } = require(\'worker_threads\');\\n\\nconsole.log(\'Received data:\', workerData); // Output: { name: \'Node.js\', version: 18 }\\n\\n
parentPort
This object is used within the worker thread to communicate with the main thread. It establishes a two-way messaging channel for sending and receiving messages. The worker thread uses its postMessage
method to send data back to the main thread:
// worker.js\\n\\nconst { parentPort } = require(\'worker_threads\');\\n\\nparentPort.on(\'message\', (message) => {\\n console.log(\'Received from main thread:\', message);\\n\\n // Process the task and send a result\\n const result = message * 2;\\n parentPort.postMessage(result);\\n});\\n\\n
On the other side, the main thread listens for the worker’s replies using the message
event on the worker instance, and sends tasks with worker.postMessage():
// main.js\\nconst { Worker } = require(\'worker_threads\');\\n\\nconst worker = new Worker(\'./worker.js\');\\n\\nworker.on(\'message\', (result) => {\\n console.log(\'Result from worker:\', result); // Output: 20 (for input 10)\\n});\\n\\nworker.postMessage(10); // Send a task to the worker\\n\\n
Note that the worker.postMessage
method is used instead of the workerData
option. It is employed when a second argument is not passed to the Worker
constructor during initialization, or when data needs to be sent to the worker after it has started.
To illustrate how you can integrate worker threads into your applications, let’s consider a practical, real-world scenario.
\\nImagine you need to perform various image transformations such as resizing, grayscaling, and rotation on a large collection of image files. Without worker threads, this would congest the event loop and block the main thread.
\\nBy using worker threads, you can distribute these tasks across multiple CPU cores to ensure that your application’s performance remains smooth and responsive while efficiently processing all the images.
\\nI’ll assume you already have a Node.js project set up. If you don’t, you can check out our guide on how to set one up.
\\nOnce your project is ready, copy and run the following command in your terminal to install sharp, a high-performance image processing library for Node.js:
\\nnpm install sharp\\n\\n
Here is our project structure:
\\nimage-processor-project/\\n│\\n├── src/\\n│ ├── imageProcessor.js\\n│ └── imageWorker.js\\n│\\n├── images/\\n│ ├── image1.jpg\\n│ ├── image2.png\\n│ └── image3.jpeg\\n│\\n├── package.json\\n└── index.js\\n\\n
This project structure separates concerns in the application, most especially the worker thread codes from the main thread code. This way we avoid ambiguity and eliminate the need for checks like this:
\\nif (isMainThread) {\\n  // Run main thread code\\n} else {\\n  // Run worker thread code\\n}\\n\\n
After setting up the project structure, open the imageProcessor.js
file and include the code needed to create workers:
const { Worker } = require(\\"worker_threads\\");\\nconst path = require(\\"path\\");\\n\\nclass ImageProcessor {\\n constructor(maxConcurrency = require(\\"os\\").cpus().length) {\\n this.maxConcurrency = maxConcurrency;\\n }\\n\\n async processImages(imagePaths, processingOptions) {\\n return new Promise((resolve, reject) => {\\n const results = [];\\n let activeWorkers = 0;\\n let completedWorkers = 0;\\n const queue = [...imagePaths];\\n\\n const processNextImage = () => {\\n if (queue.length === 0 || activeWorkers >= this.maxConcurrency) {\\n return;\\n }\\n\\n const imagePath = queue.shift();\\n activeWorkers++;\\n\\n const worker = new Worker(path.resolve(__dirname, \\"imageWorker.js\\"), {\\n workerData: {\\n imagePath: imagePath,\\n options: processingOptions,\\n },\\n });\\n\\n worker.on(\\"message\\", (result) => {\\n results.push(result);\\n activeWorkers--;\\n completedWorkers++;\\n\\n processNextImage();\\n\\n if (completedWorkers === imagePaths.length) {\\n resolve(results);\\n }\\n });\\n\\n worker.on(\\"error\\", (error) => {\\n reject(error);\\n });\\n };\\n\\n while (activeWorkers < this.maxConcurrency && queue.length > 0) {\\n processNextImage();\\n }\\n });\\n }\\n\\n async processBatch(imagePaths, batchSize) {\\n const results = [];\\n\\n for (let i = 0; i < imagePaths.length; i += batchSize) {\\n const batchPaths = imagePaths.slice(i, i + batchSize);\\n const batchResults = await this.processImages(batchPaths);\\n results.push(...batchResults);\\n }\\n return results;\\n }\\n}\\n\\nmodule.exports = ImageProcessor;\\n\\n
This code defines an ImageProcessor
class that takes an optional maxConcurrency
argument, which determines the maximum number of workers that can run simultaneously. Inside this class are two methods: processImages
and processBatch
.
The processImages
method queues up the images in an array and creates a new Worker
for each image using the imageWorker.js
script and passes imagePath
and processingOptions
as workerData
to the thread.
The processBatch
method iterates over the imagePaths
in chunks. For each batch, it calls processImages
to handle the images and waits
for each batch to complete before moving to the next.
Next, go to the imageWorker.js
file and add the following code:
const { parentPort, workerData } = require(\\"worker_threads\\");\\nconst fs = require(\\"fs\\");\\nconst path = require(\\"path\\");\\nconst sharp = require(\\"sharp\\");\\n\\nasync function processImage() {\\n const { imagePath, options } = workerData;\\n try {\\n\\n if (!fs.existsSync(imagePath)) {\\n throw new Error(`Image file not found: ${imagePath}`);\\n }\\n\\n // Read image\\n const inputBuffer = fs.readFileSync(imagePath);\\n let sharpInstance = sharp(inputBuffer);\\n\\n if (options.width || options.height) {\\n sharpInstance = sharpInstance.resize({\\n width: options.width,\\n height: options.height,\\n fit: options.fit || \\"cover\\",\\n });\\n }\\n\\n if (options.rotation) {\\n sharpInstance = sharpInstance.rotate(options.rotation);\\n }\\n\\n if (options.grayscale) {\\n sharpInstance = sharpInstance.grayscale();\\n }\\n\\n const processedImage = await sharpInstance.toBuffer();\\n\\n const outputFilename = `processed_${path.basename(imagePath)}`;\\n const outputPath = path.join(path.dirname(imagePath), outputFilename);\\n\\n // Save processed image\\n await sharp(processedImage).toFile(outputPath);\\n\\n // Send processed image details back to main thread\\n parentPort.postMessage({\\n originalPath: imagePath,\\n outputPath: outputPath,\\n processedSize: processedImage.length,\\n success: true,\\n });\\n } catch (error) {\\n parentPort.postMessage({\\n originalPath: imagePath,\\n error: error.message,\\n success: false,\\n });\\n }\\n}\\n\\nprocessImage();\\n\\n
This code runs on the worker thread. Upon initialization, the worker immediately executes the main function in the script: processImage
. This function uses the sharp library and data sent from the main thread via the workerData
property to transform the image located at the specified path. The output file is then generated with the original filename prefixed by \\"processed_\\"
.
When the operation is done, the result is communicated back to the main thread via the parentPort.postMessage
method, which includes the original path, the output path, the processed size, and a success flag:
// Send processed image details back to main thread\\n parentPort.postMessage({\\n originalPath: imagePath,\\n outputPath: outputPath,\\n processedSize: processedImage.length,\\n success: true,\\n });\\n
If there’s an error, the function sends an error message with details and a failure status via postMessage
.
For the final step, go to the index.js
file and add the main thread code:
const path = require(\\"path\\");\\nconst ImageProcessor = require(\\"./src/imageProcessor\\");\\n\\nconst processor = new ImageProcessor(2); // Limit to 2 concurrent workers\\n\\n// Get all image files in the images directory\\nconst imagePaths = [\\n path.resolve(__dirname, \\"images/image1.png\\"),\\n path.resolve(__dirname, \\"images/image2.jpg\\"),\\n path.resolve(__dirname, \\"images/image3.jpg\\"),\\n];\\n\\nconst processingOptions = {\\n width: 800,\\n height: 600,\\n rotation: 90,\\n grayscale: true,\\n fit: \\"contain\\",\\n};\\n\\nasync function processImageFile() {\\n try {\\n const results = await processor.processBatch(\\n imagePaths,\\n imagePaths.length,\\n processingOptions\\n );\\n console.log(\\"Processing Results:\\", results);\\n } catch (error) {\\n console.error(\\"Image processing failed:\\", error);\\n }\\n}\\n\\nprocessImageFile().catch(console.error);\\n\\n
Here, an instance of the ImageProcessor
class is created with a concurrency limit of 2
, meaning only two worker threads will run simultaneously. The main logic is encapsulated in the processImageFile
function, which calls the processBatch
method of the ImageProcessor
instance. This method spawns worker threads to process images concurrently, with a maximum of two threads running at a time.
Like any other tool, parallelism in Node.js is not without its flaws, and implementing it requires careful consideration.
\\nCreating worker threads comes with a cost. Because each thread creates a dedicated instance of the V8 engine, it requires memory allocation and initialization, which can impact performance if threads are frequently created and destroyed. To minimize this overhead, it’s best to use a thread pool or reuse worker threads whenever possible.
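As a minimal sketch of the reuse idea, a single long-lived worker can receive many tasks over its message channel instead of being created and destroyed per task (the reusableWorker.js script and its { id, payload } message shape are hypothetical):
// main.js: reuse one long-lived worker for many tasks\\nconst { Worker } = require(\'worker_threads\');\\n\\nconst worker = new Worker(\'./reusableWorker.js\'); // hypothetical worker script\\nlet nextId = 0;\\nconst pending = new Map();\\n\\n// Resolve the matching promise when the worker replies\\nworker.on(\'message\', ({ id, result }) => {\\n  pending.get(id)(result);\\n  pending.delete(id);\\n});\\n\\nfunction runTask(payload) {\\n  return new Promise((resolve) => {\\n    const id = nextId++;\\n    pending.set(id, resolve);\\n    worker.postMessage({ id, payload }); // no new Worker per task\\n  });\\n}\\n\\n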
\\nTransferring large amounts of data between the worker thread and the main thread can incur significant overhead. This is because data is exchanged through message passing, which requires serialization and deserialization. These operations can be computationally expensive, particularly for large or complex objects.
\\nTo optimize performance, consider using transferable objects (such as ArrayBuffer
), minimizing the frequency and size of data transfers, and leveraging structured cloning efficiently.
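For instance, an ArrayBuffer can be handed over in the transfer list, so ownership moves to the worker instead of the data being cloned:
const { Worker } = require(\'worker_threads\');\\n\\nconst worker = new Worker(\'./worker.js\');\\nconst buffer = new ArrayBuffer(1024 * 1024); // 1 MB of binary data\\n\\n// The second argument is the transfer list: the buffer is moved, not copied,\\n// and becomes unusable in the main thread afterwards\\nworker.postMessage({ buffer }, [buffer]);\\n\\n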
As seen in the image processing example, worker thread code can become complex and ambiguous, often leading to what developers refer to as spaghetti code. This can significantly impact readability and maintainability.
\\nTo avoid this, it’s best practice to keep your worker thread code clean, well-structured, and thoroughly documented. This improves understanding and makes future modifications easier.
\\nIt is important to prioritize error handling while working with worker threads. This is not just because of the complexity of the code but also because of the complexity of error propagation.
\\nIn a single-threaded application, unhandled errors naturally bubble up the call stack and cause the application to crash. However, with worker threads, the process changes. Errors in worker threads don’t directly impact the main thread, which can lead to them being silently ignored, thus making them difficult to detect.
\\nTo catch worker errors, you must listen for the error
event on the worker object in the main thread:
// Main thread\\n\\nconst worker = new Worker(\'worker.js\');\\n\\nworker.on(\'error\', (error) => {\\n console.error(\'Worker encountered an error:\', error);\\n});\\n\\n
Another limitation of error handling in worker threads is the restricted stack trace. Because the stack trace is confined to the worker context, it becomes difficult to trace the root cause and fully understand the error’s context.
\\nFor example, say you remove the processingOptions
parameter from the processBatch
method and fail to pass it as an argument to the processImages
method, as shown below:
async processBatch(imagePaths, batchSize) {\\n const results = [];\\n\\n for (let i = 0; i < imagePaths.length; i += batchSize) {\\n const batchPaths = imagePaths.slice(i, i + batchSize);\\n const batchResults = await this.processImages(batchPaths);\\n results.push(...batchResults);\\n }\\n return results;\\n }\\n\\n
You will get an error in the console, but it provides little to no information, such as the file or line of code where the error originates.
\\nTo effectively debug worker threads, you need to:

- Listen for the error, exit, and message events on every worker you create
- Include contextual details, such as the file being processed, in the messages and errors a worker posts back
- Log errors inside the worker itself before they propagate to the main thread
\\nParallel computing in Node.js unlocks the potential to handle CPU-intensive tasks efficiently. With worker threads, developers are no longer limited to a single thread and can build scalable, high-performance applications that fully leverage modern hardware capabilities.
\\nThe MERN stack — MongoDB, Express.js, React, and Node.js — is a set of JavaScript tools that enable developers to build dynamic, performant, and modern websites and applications.
\\nThese four technologies are among the most popular tools in the web development space. Companies like Databricks, AWS, Netflix, Shutterstock, and Meta use one or more of these tools for their websites and other digital platforms.
\\nIn this article, we’ll explore the MERN stack in detail, learn why it’s a favorite among developers, and compare it to alternative JavaScript stacks.
\\nIf you’re more of a hands-on learner, check out our MERN stack tutorial, which walks you through building a CRUD app from scratch.
\\nThe MERN stack is a JavaScript stack consisting of four technologies:
\\nReact is a popular JavaScript framework for handling the frontend and user interface of websites and web apps. It was released on May 29, 2013, and has become one of the leading frontend solutions in web development. It has also led to the creation of meta frameworks such as Next.js, Remix, and Preact, which all use React as their foundation.
\\nThe 2024 Stack Overflow developer survey ranks React as the second most popular web framework, while the State of JavaScript 2024 places it as the most-used web framework.
\\nReact is used by various top brands, including Dropbox, Yahoo, Airbnb, and Netflix. A major reason for React’s wide and continued adoption is the features it provides. The latest version of React, React 19, includes the following features:
- useOptimistic hook — Allows for optimistic UI updates that automatically revert if an operation fails
- useActionState hook — Simplifies form submissions and actions by automating state updates and error management with a single hook
- useFormStatus hook — Gives you real-time insight into a form's status from the last submission
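As a quick illustration, here is a minimal sketch of useOptimistic; the sendMessage prop is an assumed server action, not part of any official example:

import { useOptimistic } from "react";

function Thread({ messages, sendMessage }) {
  // Optimistically append the pending message; React reverts automatically
  // if the underlying `messages` state never receives it (e.g., the send fails)
  const [optimisticMessages, addOptimisticMessage] = useOptimistic(
    messages,
    (current, newMessage) => [...current, { text: newMessage, sending: true }]
  );

  async function formAction(formData) {
    addOptimisticMessage(formData.get("message"));
    await sendMessage(formData); // assumed server call passed in as a prop
  }

  return (
    <form action={formAction}>
      <ul>
        {optimisticMessages.map((m, i) => (
          <li key={i}>{m.text}{m.sending ? " (sending…)" : ""}</li>
        ))}
      </ul>
      <input name="message" />
      <button type="submit">Send</button>
    </form>
  );
}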
MongoDB is an open-source NoSQL document database. Since its release in 2009, MongoDB has become the most popular NoSQL database.
\\nCompanies like L’Oréal, Adobe, Delivery Hero, and Forbes use MongoDB to meet their data storage needs.
\\nWhile it has several applications, MongoDB is mainly used to store structured, semi-structured, and unstructured data. Its features include:
\\nMongoDB stores data in a JSON-like format called BSON (Binary JSON). This format allows for nested objects, arrays, and flexible data types, making it ideal for handling complex data in modern applications.
\\nHere’s a sample of employee data in BSON format:
\\n{\\n \\"_id\\": ObjectId(\\"650d2b7e8c9b3f001e3f4a2d\\"),\\n \\"name\\": \\"John Doe\\",\\n \\"age\\": 30,\\n \\"position\\": \\"Software Engineer\\",\\n \\"hire_date\\": \\"2022-06-15\\",\\n \\"skills\\": [\\"JavaScript\\", \\"React\\", \\"Node.js\\"],\\n \\"address\\": {\\n \\"street\\": \\"123 Main St\\",\\n \\"city\\": \\"New York\\",\\n \\"state\\": \\"NY\\",\\n \\"zip\\": \\"10001\\"\\n },\\n \\"is_active\\": true\\n}\\n\\n
Node.js is an open-source runtime environment that runs on various platforms, including Windows, macOS, and Linux. Created in May 2009, Node.js has become one of the most widely used web technologies, according to Statista and Stack Overflow.
\\nNode.js enables developers to execute JavaScript outside the browser. This is a game-changer because, before Node.js, JavaScript was mostly limited to web pages. With Node.js, you can use JavaScript for server-side scripting, file system operations, and even building full-fledged applications.
\\nCommon use cases for Node.js include creating microservices, real-time apps, and collaborative tools. You can also use Node.js to build backend services, RESTful APIs, and GraphQL servers.
\\nNode.js powers applications for companies like WhatsApp, Slack, LinkedIn, and GitLab. Its features include:
\\nExpress.js is an open-source, lightweight Node.js framework for creating backend apps. Released in 2010, Express has emerged as the most popular Node.js framework among other alternatives like Nest and Koa.
What's the point of using a framework like Express.js instead of just using Node.js? While Node.js allows you to run JavaScript outside the browser and handle web servers, it doesn't have built-in tools to manage things like routing or request handling efficiently. That's where Express comes in.
\\nInstead of writing repetitive code to handle requests, routes, and responses, Express simplifies the process with a minimal and flexible API. It also supports middleware, which allows you to add features such as authentication, logging, and error handling without cluttering your core application logic.
Let's consider the example of a login form to see just how much Express.js improves backend code. Here's the login form's logic written in plain Node.js:
\\nconst http = require(\'http\'); // Import the built-in HTTP module\\nconst querystring = require(\'querystring\'); // Import module to parse form data\\n\\n// Create a basic HTTP server\\nconst server = http.createServer((req, res) => {\\n if (req.method === \'POST\' && req.url === \'/login\') {\\n let body = \'\';\\n\\n // Collect incoming data chunks\\n req.on(\'data\', chunk => {\\n body += chunk.toString();\\n });\\n\\n // Once all data is received, process it\\n req.on(\'end\', () => {\\n const { username, password } = querystring.parse(body); // Parse form data\\n\\n // Simple authentication check (Replace this with database logic)\\n if (username === \'admin\' && password === \'password123\') {\\n res.writeHead(200, { \'Content-Type\': \'text/plain\' });\\n res.end(\'Login successful\');\\n } else {\\n res.writeHead(401, { \'Content-Type\': \'text/plain\' });\\n res.end(\'Invalid credentials\');\\n }\\n });\\n } else {\\n // Handle other routes\\n res.writeHead(404, { \'Content-Type\': \'text/plain\' });\\n res.end(\'Not Found\');\\n }\\n});\\n\\n// Start the server on port 3000\\nserver.listen(3000, () => console.log(\'Server running on http://localhost:3000\'));\\n\\n
Now, here’s the Express.js version, which is shorter and easier to write and read:
\\nconst express = require(\'express\'); // Import Express framework\\nconst bodyParser = require(\'body-parser\'); // Middleware to parse request body\\n\\nconst app = express();\\n\\n// Middleware to parse form data (application/x-www-form-urlencoded)\\napp.use(bodyParser.urlencoded({ extended: true }));\\n\\n// Define a login route\\napp.post(\'/login\', (req, res) => {\\n const { username, password } = req.body; // Extract username and password from request\\n\\n // Simple authentication check (Replace this with database logic)\\n if (username === \'admin\' && password === \'password123\') {\\n res.status(200).send(\'Login successful\');\\n } else {\\n res.status(401).send(\'Invalid credentials\');\\n }\\n});\\n\\n// Start the Express server on port 3000\\napp.listen(3000, () => console.log(\'Express server running on http://localhost:3000\'));\\n\\n
So, while you could build everything from scratch with just Node.js, Express saves time, improves code organization, and helps you build scalable applications faster. This is why products and companies like ChatGPT, Substack, Salesforce, and CodeSandbox use Express.js.
\\nWe’ve learned about the components in the MERN stack and what they do. Now, let’s understand how they work together by considering a real-life web development scenario: a user signing up for an application.
\\nEach technology in the MERN stack plays a specific role in this process:
The user fills out a signup form with their name, email, and password. There are various ways to capture the user's input, including the useState hook, the Context API, or third-party state management solutions.
When they click “Sign up,” React sends this data to the backend via a POST request.
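Here is a minimal sketch of what that request might look like; the /api/signup endpoint and payload shape are assumptions for illustration:

// Hypothetical signup submit handler; endpoint and field names are assumed
async function handleSignup(formData) {
  const response = await fetch('/api/signup', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: formData.name,
      email: formData.email,
      password: formData.password,
    }),
  });
  if (!response.ok) throw new Error('Signup failed');
  return response.json();
}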
\\nExpress.js receives the POST request from React, processes it, and parses the incoming JSON data. Middleware like express.json()
(built-in in Express 4.16+) or body-parser
helps handle request bodies.
The backend code validates the data, checks whether the email is already in use, and sanitizes the input. You can use express-validator
or custom validation logic.
If everything is correct, it hashes the password for security using the bcrypt
library or similar tools and sends it to MongoDB. If there are errors, the server responds with an error message.
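Putting those steps together, a signup route could look like this sketch; the User model, route path, and status codes are illustrative, and it assumes express.json() is applied upstream:

const express = require('express');
const bcrypt = require('bcrypt');
const User = require('./models/User'); // hypothetical Mongoose model

const router = express.Router();

router.post('/api/signup', async (req, res) => {
  const { name, email, password } = req.body;
  if (!name || !email || !password) {
    return res.status(400).json({ message: 'All fields are required' });
  }
  // Reject duplicate accounts before creating a new one
  const existing = await User.findOne({ email });
  if (existing) {
    return res.status(409).json({ message: 'Email already in use' });
  }
  // Never store plaintext passwords; hash with bcrypt first
  const hashed = await bcrypt.hash(password, 10);
  const user = await User.create({ name, email, password: hashed });
  res.status(201).json({ id: user._id, email: user.email });
});

module.exports = router;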
MongoDB stores the user’s details, including the hashed password, in a “users” collection. You can use Mongoose, a popular ODM library, to define a schema for the user data and ensure the required fields, such as email and password, are always present.
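For example, a minimal user schema sketch with Mongoose might look like this (field names assumed):

const mongoose = require('mongoose');

// Required fields guarantee email and password are always present
const userSchema = new mongoose.Schema({
  name: { type: String, required: true },
  email: { type: String, required: true, unique: true },
  password: { type: String, required: true }, // stores the bcrypt hash
});

module.exports = mongoose.model('User', userSchema);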
\\nOnce the user data is stored, the backend sends a “success” response.
\\nIf the signup is successful, Express sends a response to React, confirming account creation. React then updates the UI and redirects the user to their dashboard.
\\nThere’s so much more that goes into a signup flow, but this example gives you an idea of how the MERN stack can help you achieve such a flow.
\\nThere’s no limit to what you can build with the MERN stack — except maybe a time machine. Common applications include:
\\nHere’s how Expedia uses the MERN stack to deliver personalized travel recommendations:
\\nSega uses the MERN stack to personalize gaming experiences:
\\nVerizon uses the MERN stack to power 5G and IoT data:
\\nCoinbase uses the MERN stack to scale cryptocurrency trading:
\\neBay uses the MERN stack to handle high-volume transactions:
\\nBenefits of working with the MERN stack include:
\\nLike every technological approach, the MERN stack has its limitations and disadvantages, including:
\\nTo show the MERN stack in action, let’s talk through how to build a full-stack to-do app where users can create, read, and delete to-do items.
\\nTo follow along with this step-by-step guide, you’ll need:
\\nYou can structure the app to suit your preferences. However, if you’d like to use my folder structure, use the image below as a reference:
\\nHere’s the GitHub repo for the complete app if you’d like to jump straight to the code.
\\nFirst, we need to define the structure of our to-do items. Create a todoSchema.js
file and update it with the below code:
const mongoose = require(\'mongoose\');\\n\\n// Define the Todo schema with a label and status field\\nconst todoSchema = new mongoose.Schema({\\n label: { type: String, required: true },\\n status: { type: String, enum: [\'pending\', \'completed\'], default: \'pending\' },\\n});\\n\\n// Create and export the Todo model\\nconst Todo = mongoose.model(\'Todo\', todoSchema);\\nmodule.exports = Todo;\\n\\n
The code above does the following:
\\nNow, let’s set up our backend server using Node.js, Express, and MongoDB.
\\n\\nRun the following command to install the required packages:
\\nnpm install express mongoose cors body-parser\\n\\n
Create a server.js
file and add the following:
const mongoose = require(\'mongoose\');\\nconst express = require(\'express\');\\nconst cors = require(\'cors\');\\nconst bodyParser = require(\'body-parser\');\\nconst Todo = require(\'./models/todoSchema\');\\n\\nconst app = express();\\n\\n// Middleware setup\\napp.use(cors());\\napp.use(bodyParser.json());\\n\\n// Connect to MongoDB\\nconst dbURI = \'your-mongodb-uri-here\';\\nmongoose\\n .connect(dbURI, { useNewUrlParser: true, useUnifiedTopology: true })\\n .then(() => {\\n app.listen(3001, () => {\\n console.log(\'Server is running on port 3001 and connected to MongoDB\');\\n });\\n })\\n .catch((error) => {\\n console.error(\'Failed to connect to MongoDB:\', error);\\n });\\n\\n// Routes for CRUD operations\\n\\n// Get all todos\\napp.get(\'/todos\', async (req, res) => {\\n try {\\n const todos = await Todo.find();\\n res.json(todos);\\n } catch (error) {\\n res.status(500).json({ message: \'Unable to retrieve todos\', error });\\n }\\n});\\n\\n// Create a new todo\\napp.post(\'/todos\', async (req, res) => {\\n try {\\n const { label, status } = req.body;\\n const todo = new Todo({ label, status });\\n const savedTodo = await todo.save();\\n res.status(201).json({ message: \'Todo successfully created\', todo: savedTodo });\\n } catch (error) {\\n res.status(500).json({ message: \'Unable to create todo\', error });\\n }\\n});\\n\\n// Update an existing todo\\napp.put(\'/todos/:id\', async (req, res) => {\\n try {\\n const { id } = req.params;\\n const { label, status } = req.body;\\n const updatedTodo = await Todo.findByIdAndUpdate(\\n id,\\n { label, status },\\n { new: true, runValidators: true }\\n );\\n if (!updatedTodo) {\\n return res.status(404).json({ message: \'Todo not found\' });\\n }\\n res.json({ message: \'Todo successfully updated\', todo: updatedTodo });\\n } catch (error) {\\n res.status(500).json({ message: \'Unable to update todo\', error });\\n }\\n});\\n\\n// Delete a todo\\napp.delete(\'/todos/:id\', async (req, res) => {\\n try {\\n const { id } = req.params;\\n const deletedTodo = await Todo.findByIdAndDelete(id);\\n if (!deletedTodo) {\\n return res.status(404).json({ message: \'Todo not found\' });\\n }\\n res.json({ message: \'Todo successfully deleted\', todo: deletedTodo });\\n } catch (error) {\\n res.status(500).json({ message: \'Unable to delete todo\', error });\\n }\\n});\\n\\nmodule.exports = app;\\n\\n
This step sets up the backend server and API endpoints for handling to-do tasks. It:
\\nCreate a TodoApp.js
file and update it with the following code:
<div className=\\"min-h-screen bg-gray-100 flex items-center justify-center\\">\\n <div className=\\"bg-white p-6 rounded-lg shadow-md w-full max-w-md\\">\\n <h1 className=\\"text-2xl font-bold mb-4 text-center text-orange-500\\">Todo App</h1>\\n <form onSubmit={handleSubmit} className=\\"flex mb-4\\">\\n <input\\n type=\\"text\\"\\n value={newTask}\\n onChange={(e) => setNewTask(e.target.value)}\\n placeholder=\\"Type todo here...\\"\\n className=\\"flex-1 px-3 py-2 border rounded-l-md focus:outline-none\\"\\n />\\n <button type=\\"submit\\" className=\\"bg-orange-500 text-white px-4 py-2 rounded-r-md hover:bg-orange-600 focus:outline-none\\">\\n Add\\n </button>\\n </form>\\n <ul>\\n {tasks.map((task) => (\\n <li\\n key={task._id}\\n className={\\n \\\\`flex items-center justify-between p-2 mb-2 rounded-md \\\\${task.status === \'completed\' ? \'bg-orange-100\' : \'bg-gray-50\'}\\\\`\\n }\\n >\\n <input\\n type=\\"checkbox\\"\\n checked={task.status === \'completed\'}\\n onChange={() => toggleCompletion(task._id, task.status)}\\n className=\\"form-checkbox h-5 w-5 text-orange-500\\"\\n />\\n <span\\n className={\\n \\\\`flex-1 ml-2 cursor-pointer \\\\${task.status === \'completed\' ? \'line-through text-gray-500\' : \'\'}\\\\`\\n }\\n onClick={() => toggleCompletion(task._id, task.status)}\\n >\\n {task.label}\\n </span>\\n <button\\n onClick={() => handleDelete(task._id)}\\n className=\\"text-red-500 hover:text-red-700 focus:outline-none\\"\\n aria-label=\\"Delete\\"\\n >\\n X\\n </button>\\n </li>\\n ))}\\n </ul>\\n </div>\\n</div>\\n\\n
The code above does the following:

- Uses React state (useState) to store the list of tasks and the new task input
- Fetches the existing to-dos from the backend with useEffect when the component mounts
- Sends POST, PUT, and DELETE requests to the Express API to create, toggle, and remove tasks
This is a basic project demonstrating how to build an app with the MERN stack. You can create something more complex or extend this further by adding user authentication, drag-and-drop sorting, and also deploying it to your preferred platform.
\\nFor a deeper dive, check out our comprehensive tutorial on developing a simple CRUD application from scratch using the MERN stack.
The MERN stack is a popular choice for full-stack JavaScript development, but it's not the only JavaScript stack out there. Let's see how the alternatives "stack" up against each other — pun intended:
\\nThe first and most obvious difference between the MERN and MEAN stacks is their frontend framework. MERN uses React as its frontend framework, while Angular handles MEAN’s frontend. While both tools handle the UI, they have their distinctions.
\\nReact is a flexible, non-opinionated framework, while Angular is heavily opinionated and comes with conventions that dictate how applications should be structured. Angular provides more structure, but it also comes with a steeper learning curve.
\\nMEAN is well-suited for enterprise applications that need strict architectural guidelines, while MERN is better for projects that need more flexibility.
\\nMERN and MEVN are similar, except for their frontend framework, as MEVN uses Vue.js. Vue.js is known for its simplicity and ease of learning compared to React, making it a great choice for smaller projects.
\\nReact has a larger ecosystem and more third-party components than Vue.js. However, Vue.js is easier to learn and is more beginner-friendly.
\\nA core difference between the MERN stack and JAMstack (JavaScript, APIs, and Markup) is their purpose. While MERN is tailored towards building data-intensive web apps, the JAMstack approach is used to create static or content-heavy sites — with little or no dynamic interactions — like blogs, documentation sites, and ecommerce stores, where speed and SEO are major priorities.
Unlike the MERN stack, which prescribes a specific set of tools, JAMstack doesn't mandate any particular technologies. Instead, it's an architectural approach centered on serving prebuilt static files through a CDN to reduce server load and improve performance.
\\nJAMstack uses JavaScript for interactivity, APIs for backend services, and Markdown or headless CMS solutions for content management.
When it comes to MERN vs. PERN, the database is the differentiator, as the PERN stack uses PostgreSQL as its database. PostgreSQL is an open-source relational database that uses SQL for queries, and the Stack Overflow developer survey has ranked it the most popular database for the second year in a row.
\\nPostgreSQL is a schema-based database that supports structured data. This means the structure of the data must be defined before it is stored. Meanwhile, MongoDB is schema-less and offers greater flexibility on how you store and modify your data.
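As a quick illustration of that flexibility (the collection and fields are invented), two documents in the same MongoDB collection can have different shapes, whereas PostgreSQL would require a schema migration first:

// MongoDB: no upfront schema, so documents can vary per insert
db.users.insertOne({ name: "Ada", email: "ada@example.com" });
db.users.insertOne({ name: "Grace", email: "grace@example.com", skills: ["COBOL"] });
// In PostgreSQL, adding a `skills` column would first require:
// ALTER TABLE users ADD COLUMN skills text[];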
\\nThe T3 stack takes a completely different approach from the other stacks we’ve explored. It introduces a completely new set of tools and is built around TypeScript, Next.js, tRPC, Tailwind CSS, and Prisma. Companies like Zoom are already using this stack in production.
\\nThe T3 stack was created as a modern alternative that uses the latest technologies and prioritizes type safety and developer experience.
\\nHowever, this stack is not as popular or battle-tested as other stacks. I recommend experimenting with it on personal projects before adopting it for larger-scale or production applications. Also, review online discussions to learn what the community is saying about the T3 stack.
Here's a tabular summary of how the MERN stack compares with other JavaScript stacks:
\\nFeature | \\nMERN Stack (MongoDB, Express.js, React, Node.js) | \\nMEAN Stack (MongoDB, Express.js, Angular, Node.js) | \\nMEVN Stack (MongoDB, Express.js, Vue.js, Node.js) | \\nJAMstack (JavaScript, APIs, Markup) | \\nPERN Stack (PostgreSQL, Express.js, React, Node.js) | \\nT3 Stack (TypeScript, Next.js, tRPC, Tailwind, Prisma) | \\n
---|---|---|---|---|---|---|
Learning curve | \\nModerate | \\nSteep | \\nEasy | \\nModerate | \\nModerate | \\nModerate | \\n
Flexibility | \\nHigh | \\nLow (Opinionated) | \\nHigh | \\nHigh | \\nModerate | \\nHigh | \\n
Best for | \\nDynamic web apps, SPAs | \\nEnterprise apps, structured projects | \\nSmall and large-scale projects, beginner-friendly apps | \\nStatic sites, content-heavy apps | \\nApps needing relational data & SQL | \\nType-safe, modern development | \\n
Ecosystem & support | \\nLarge community, many libraries | \\nStrong Angular ecosystem | \\nStrong community | \\nLarge ecosystem | \\nLarge ecosystem | \\nLarge and growing community | \\n
SEO | \\nModerate | \\nModerate | \\nModerate | \\nHigh (Pre-built static files) | \\nModerate | \\nHigh (Server-side rendering) | \\n
Use in production | \\nWidely used | \\nCommon in enterprise settings | \\nUsed for simple to large projects | \\nPopular for blogs & eCommerce | \\nPopular in data-heavy apps | \\nNot widely adopted yet | \\n
Dynamic vs static content | \\nDynamic | \\nDynamic | \\nDynamic | \\nStatic | \\nDynamic | \\nDynamic & static (SSG & SSR) | \\n
Third-party libraries | \\nExtensive React ecosystem | \\nAngular has built-in solutions | \\nVue has a growing library base | \\nUses APIs & third-party services | \\nStrong SQL-based tool support | \\nType-safe libraries available | \\n
Data binding | \\nOne-way (React) | \\nTwo-way (Angular) | \\nTwo-way (Vue) | \\nAPI-based | \\nOne-way (React) | \\nOne-way (React-based) | \\n
Deploying a MERN stack app requires hosting for the frontend (React), backend (Node.js and Express), and database (MongoDB), which can make it complex. However, it’s not impossible. Let’s explore several deployment routes you can use.
Cloud platforms like Vercel, AWS, Heroku, Azure, and Render offer one of the most straightforward deployment methods. They handle most of the infrastructure and provide automatic scaling, CI/CD pipelines, and managed databases with minimal configuration.
If you need greater control, then VPS hosting on platforms like DigitalOcean, AWS EC2, or Linode is the way to go. It requires manual setup, but offers greater flexibility and cost control, making it a good choice for growing applications with specific backend requirements.
\\nDocker allows you to package the frontend, backend, and database into containers, while Kubernetes helps orchestrate, scale, and manage these containers efficiently. Learn more about Docker for frontend developers and how to deploy a React app to Kubernetes using Docker.
\\nWe’ve learned a lot about the MERN stack and seen various reasons why it’s a great stack to work with. However, at the end of the day, it’s not suitable for every project — that’s why alternative stacks exist.
\\nLet’s explore when to use the MERN stack and when to consider other options.
\\nThe MERN stack is great for building:
The MERN stack is less suitable for building SEO-critical or content-heavy sites; meta-frameworks like Next.js ship dedicated image and script components that further improve SEO and performance, making them a better fit for such projects.

If you've made it this far, congratulations! You've learned why the MERN stack is so popular and the benefits it provides.
\\nNow, let’s explore the answer to a question many developers ask: “How do I become a MERN stack developer?”
The first step to becoming a MERN stack developer is gaining the right knowledge, which involves learning the following technologies — preferably in the order listed, as your understanding of one will act as a foundation for the next:
\\nNote: In truth, this is just a low-level overview of a MERN stack developer roadmap. For a more detailed guide, check out the Roadmap.sh full-stack developer roadmap.
\\nThere are varying opinions about certificates in the tech space. Some believe they’re not needed, and others believe they are.
Regardless of which camp you fall into, one thing is certain: it doesn't hurt to grab a few certificates, especially in today's competitive job market. Anything that gives you an edge over other job applicants is definitely good. These are some certifications you should consider getting:
\\nMany roles require MERN stack expertise, but the most common ones include:
Potential employers expect MERN stack developers to have numerous skills. Some of them include building responsive UIs with React, handling RESTful APIs with Express.js, managing authentication and authorization with Node.js, understanding MongoDB's document-based structure, securing databases, using Git for version control, and deploying applications on platforms like Vercel, Netlify, Heroku, or DigitalOcean.
\\nOne of the best ways to learn is by studying what others have built. You can dive into these projects, study their codebase, and gain inspiration:
\\nExplore the mern-project GitHub topic to see more great projects and templates.
\\nThe World Wide Web is saturated with many resources for learning about the MERN stack — and anything, really — but these are some great ones to start with:
\\nWhile knowing the roadmap for becoming a MERN stack developer is important, understanding the job market demand is just as critical. Many people pursue MERN stack development not just for personal projects but also to secure a job in the field. So, how in-demand are MERN stack developers today?
\\nThere are no recent stats specifically tracking the demand for MERN stack developers. However, after searching various job boards — Glassdoor, Himalayas, Indeed, LinkedIn, and Y Combinator’s Work at a Startup — for terms like “mern stack developer,” “fullstack developer,” and “web developer,” I noticed that less than 25 percent of job postings requested the MERN stack, and these are conservative figures.
\\nThis suggests that demand for the MERN stack as a whole has declined compared to a few years or a decade ago.
The insights I gathered from the job boards match Google search trends, which show that the search volume for "full stack developer" far outweighs that of "mern stack developer":
\\nWhy the drop in demand? The web development space is consistently evolving as new tools emerge. Certain technologies gain popularity while others fade. For example, jQuery was once the industry standard for JavaScript libraries, which is no longer the case.
\\nOne thing is obvious: putting all your eggs in one basket is risky. The MERN stack as a whole may not be as requested, but individual technologies within it are still highly relevant.
\\nFor example, a company might need MongoDB or NoSQL experience, and another might require React and Node.js but use PostgreSQL instead of MongoDB.
\\nSo, while you may not see “MERN stack” as a job requirement, you will find React, Node, or MongoDB appearing in different combinations across job descriptions.
\\nBecause new tools emerge constantly, limiting yourself to a single stack can affect your career path. Instead, focus on:
\\nAt the end of the day, startups and midsized companies will always need databases, frontend frameworks, and server-side libraries — but they won’t always choose the tools in the MERN stack. Some will prefer PostgreSQL, Vue, Next.js, or something entirely new. The key is to become a versatile developer who can adapt and learn new technologies quickly to stay ahead of trends.
\\nWhile the MERN stack may not be as in-demand as before, certain types of companies still hire MERN developers. Startups and mid-sized tech companies, especially those building MVPs and scalable web applications, will likely choose the MERN stack because of its JavaScript-only ecosystem, which speeds up development.
\\nIf you’re targeting MERN stack roles, smaller tech companies, digital agencies, and fast-growing startups are where you’re likely to find the most opportunities.
Frontend architecture patterns are reusable guidelines that structure any one software product's implementation on… you guessed it, the frontend. Modern software development teams use a variety of frontend architecture patterns (monolithic, modular, and component-based, just to name a few). The choice of which pattern to use depends on project complexity, scalability, maintainability, product delivery concerns, and development preferences.
\\nYour frontend architecture is the foundation of the frontend codebase you’ll maintain during the product’s lifetime, so choosing the optimal frontend architecture is a must.
\\nIn this article, we’ll discuss all popular frontend architecture patterns, their strengths, weaknesses, and usage examples. Use this article as a guide to select the optimal frontend architecture for your next sustainable software product!
\\nWe can theoretically build any software product using any available frontend architecture pattern. That’s because an architecture pattern offers a general, reusable way to structure a UI implementation — not a strict, specific development ruleset.
However, it should be noted that in practical frontend development, not all architectural patterns offer the same developer productivity. These patterns help us achieve development goals by satisfying business requirements.
\\nSo, we should always select the optimal architecture based on project complexity, scalability, maintainability, product delivery concerns (cost and time), and developer preferences.
\\nChoosing the optimal frontend architecture will:
In this article, we'll discuss the following popular, modern architecture patterns that you can use to build web, mobile, or desktop software products: monolithic, modular, component-based, microfrontend, and Flux.
\\nLet’s explore each architecture:
\\nMonolithic architecture hosts the entire app frontend’s interfaces, resources, and dependency module sources in just one project codebase. Monolithic app codebases typically use MVC (model-view-controller) alongside components, widgets, layout fragments, and other UI code decomposition strategies to organize code. The point being — all UI source files are stored within the monolithic code repository.
\\nMonolithic frontend architecture suits simpler software frontends maintained by small to medium-sized software development teams. This pattern works best in scenarios where developers prioritize faster initial project delivery over scalability and future codebase growth prevention.
\\nFor example, a small software team might choose the monolithic frontend pattern to build the frontend of a medium-sized enterprise app.
\\nMost open source SPAs (single-page applications), multi-page web apps, and other frontend projects hosted on a single GitHub code repository typically use monolithic architecture with component-based, MVC, or traditional page-based codebase arrangement. For example, this to–do app source code uses a monolithic frontend with the MVC pattern:
\\nModular architecture decomposes the codebase into separate, maintainable, and installable modules. Developers split the primary app code into sub-modules based on functionality, so they can develop, test, and deploy them as isolated entities without creating collaborative development conflicts.
\\nThe modular pattern turns a single monolithic code repository into separately maintained code repositories, but the resulting software UI is still considered a monolith. This is because modules get integrated to construct the final app.
\\n\\nThe modular pattern is a strategy to improve the maintainability and collaboration factors of large monolithic codebases without going through expensive rewrites. Developers who can invest the initial development time for future collaboration and maintainability benefits choose the modular architecture.
\\nFor example, a medium-sized software team might choose modular architecture for a medium-scale ecommerce app to create and maintain shopping, checkout, product management, and financial modules.
\\nWeb developers use Lerna-like monorepo management tools to implement productive modular frontends. This sample Lerna project guides how to implement the modular architecture pattern for a simple web app.
The component-based architecture recommends using reusable components to construct the software product interface. Components host a template, UI logic, and styles; developers usually divide large UIs into components based on functionality and relevance. A component-based app renders a screen by constructing a component tree and passes messages between components to implement interactivity. The component-based architecture is the fundamental concept in popular frontend libraries like React, Vue, Angular, and Svelte.
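As a minimal sketch (the component names are invented), a screen assembled from small, reusable components might look like this in React:

import React from "react";

// A small leaf component that renders one concern
function Price({ value }) {
  return <strong>${value.toFixed(2)}</strong>;
}

// A mid-level component composed from leaves; data flows down via props,
// events flow back up via callbacks
function ProductCard({ name, price, onAddToCart }) {
  return (
    <div>
      <h2>{name}</h2>
      <Price value={price} />
      <button onClick={() => onAddToCart(name)}>Add to cart</button>
    </div>
  );
}

// The screen is just the root of the component tree
function ProductPage() {
  return <ProductCard name="Keyboard" price={49.99} onAddToCart={console.log} />;
}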
\\nThe component-based architecture is the foundation of popular frontend libraries, so developers must adhere to it to develop software UIs per those libraries. Developers who strive for code reusability, render-tree-like code structure, and component-based unit tests choose the component-based architecture.
\\nFor example, a mobile app developer might use component-based architecture with the React Native framework to build a social media app.
\\nEvery modern frontend library recommends that developers build apps using component-based architecture. Browse any React, Angular, Vue, and Svelte apps to check the component-based architecture pattern. For example, this simple React Native chat app source uses the component-based architecture:
\\nThe microfrontend architecture motivates developers to divide the app frontend into isolated, maintainable frontend projects, known as microfrontends. Developers can create microfrontends with UI segments, components, modules, or even entire app frontends based on the complexity of the product and development preferences.
\\nA software system that follows the microfrontend pattern has two types of separate projects:
\\nMicrofrontend architecture provides solutions for maintainability, scalability, and deployment issues for complex and large-scale monolithic frontend projects. Microfrontend architecture also brings impressive code reusability benefits when it comes to separate app frontend instances.
\\nMicrofrontend architecture is the recommended approach for complex projects maintained by large development teams. In a real-life application, a company might build a complete ERP (enterprise resource planning) app by creating microfrontends for different ERP submodules.
The open source community doesn't have many fully featured, complete, and up-to-date microfrontend projects to browse, since the microfrontend architecture is often used in closed-source, large enterprise systems. However, you can browse this GitHub repository to see a simple microfrontend app built with React:
Meta (formerly Facebook) introduced the Flux architecture for developing client-side web applications. The Flux architecture pattern introduces a better solution for application state and data flow in complex component-based apps. Flux simplifies the decentralized, bidirectional, complex nature of application state and data flow by creating centralized state stores and introducing a unidirectional data flow. Flux introduces three fundamental elements for constructing the app frontend: views, the dispatcher, and stores.
\\nRedux-like state management libraries use a simplified version of the Flux architecture. Redux uses Flux without multiple stores and uses the reducers concept within the store element.
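To make the unidirectional flow concrete, here is a minimal, dependency-free sketch of the Flux idea (not Meta's actual implementation): an action goes through a dispatcher to a store, which notifies the view:

// Dispatcher: routes every action to all registered stores
const listeners = [];
const dispatcher = {
  register(cb) { listeners.push(cb); },
  dispatch(action) { listeners.forEach((cb) => cb(action)); },
};

// Store: holds state, updates itself in response to actions, emits changes
const todoStore = {
  todos: [],
  subscribers: [],
  subscribe(cb) { this.subscribers.push(cb); },
  emit() { this.subscribers.forEach((cb) => cb(this.todos)); },
};

dispatcher.register((action) => {
  if (action.type === 'ADD_TODO') {
    todoStore.todos = [...todoStore.todos, action.payload];
    todoStore.emit();
  }
});

// View: subscribes to the store and dispatches actions; data flows one way:
// action -> dispatcher -> store -> view
todoStore.subscribe((todos) => console.log('render:', todos));
dispatcher.dispatch({ type: 'ADD_TODO', payload: 'Learn Flux' });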
\\nFlux offers a different concept for handling application state and controlling data flow by competing with existing MVC-like patterns. Though Flux adds another abstraction layer for your app logic and introduces more boilerplate code, it impressively reduces state handling and data flow complexity in component-based apps. All in all, the Flux architecture is suitable for medium-sized or large component-based apps with complex, frequently updated states.
\\nDevelopers might use Flux (via Redux or similar libraries) to develop a component-based frontend for a fully-featured live chat app or social media app.
\\nThe examples directory in the official Flux architecture documentation repository contains multiple examples of Flux in action. On the other hand, the Meta team recommends using Redux-like libraries that use a simplified implementation of the Flux architecture. See this sample to-do app source to understand the Flux architecture from Redux API usage:
The frontend architecture patterns we've discussed so far recommend a specific way to structure the app codebase to meet developer requirements and satisfy the organization's goals. These patterns will affect codebase structure and arrangement, but they don't restrict you from using other architectures.
\\nMost developers use hybrid or mixed architecture patterns, adhering to multiple architecture patterns. Here are some examples:
\\nFollowing strictly only a single frontend architecture pattern isn’t mandatory, so consider using multiple architectures based on your development preferences and organizational goals.
\\nThe following table summarizes key points and shows when to consider using each frontend architecture pattern:
\\nComparison factor | \\nMonolithic | \\nModular | \\nComponent-based | \\nMicrofrontend | \\nFlux | \\n
---|---|---|---|---|---|
Key development approach | \\nHosts every frontend source file within a single repository | \\nSeparates fronted source code into modules | \\nDivides the frontend code into reusable components | \\nDivides the frontend code into isolated apps or fully-featured components and loads them on-demand | \\nSeparates frontend source code into views, dispatchers, and stores | \\n
Usage in simple projects | \\nRecommended | \\nNot recommended (Increases complexity) | \\nRecommended | \\nNot recommended (Increases complexity) | \\nNot recommended (Increases complexity) | \\n
Usage in medium projects | \\nPartially recommended (Maintainable, but increases complexity) | \\nRecommended | \\nRecommended | \\nPartially recommended (Maintainable, but increases complexity) | \\nRecommended if developers prefer to use | \\n
Usage in large projects | \\nNot recommended (increases complexity) | \\nPartially recommended (Maintainable, but increases complexity) | \\nRecommended | \\nRecommended | \\nRecommended if developers prefer to use | \\n
Beginner-friendly? | \\nYes | \\nPerhaps | \\nYes | \\nNo | \\nNo | \\n
Initial product releases and demos | \\nFast since the architecture is simple | \\nFast but not so fast compared to monolithic due to initial setup | \\nFast since the architecture is simple | \\nSlow due to complicated initial setup | \\nFast but not so fast compared to pure component-based architecture due to initial boilerplate code and setup | \\n
In this article, we’ve explored popular frontend architecture patterns by discussing key points, strengths, weaknesses, usage scenarios, and example projects. We also went through a table that helps you choose the optimal architecture based on various development and organizational factors.
The architecture you decide to use will establish the foundation for your entire frontend source code, so you should always select the optimal architecture to prevent expensive rewrites and refactorings in the future. There is no strict rule telling you to follow just one architecture pattern — you can adhere to multiple patterns and structure your frontend codebase based on your preferences and business requirements.
\\nMonolithic, modular, component-based, microfrontend, and Flux are the popular frontend architecture patterns that most software development teams use. Keep in mind that there’s still room for you to innovate your own architecture pattern by examining your frontend development requirements, just like how Meta developers came up with Flux.
Graceful degradation is a design principle in software and system engineering that ensures a system continues functioning – albeit with reduced performance or features – when one or more of its components fail or encounter problems.
\\nRather than completely breaking down, the system “degrades gracefully” by maintaining core functionality and providing a minimally viable user experience. Which aspect is degraded depends on the kind of system/software.
For example, a mapping service might stop returning additional details about a city area because of a network slowdown, but still let the user navigate the areas of the map that have already been downloaded. Similarly, a website might remain navigable and readable even if certain scripts, images, or advanced features don't load, like a webmail client that still lets you edit your emails in airplane mode.
\\nThe concept of “graceful degradation” contrasts with “fail-fast” approaches, where a system immediately halts operations when it encounters a failure. Graceful degradation emphasizes resilience and user-centric design by ensuring critical services remain accessible during partial disruptions.
\\nAs usual, the code for this article is available on GitHub. We will use tags to follow our path along the “degradation” of the functionalities.
\\nTo support our explanation, we will use a simple application (written in Deno/Fresh but the language/framework is irrelevant in this article) that will invoke a remote API to get a fresh joke for the user.
\\nThe interface is pretty simple and the code can be found on the repository (at this tag in particular).
The islands/Joke.tsx file is a Preact component responsible for displaying a random joke in a web interface. It uses the useState and useEffect Hooks to manage the joke's state and fetch data when the component mounts. The joke is fetched from the /api/joke endpoint, and users can retrieve a new one by clicking a button. The component renders the joke along with a button that triggers fetching a new joke dynamically when clicked.
The routes/api/joke.ts file defines an API endpoint that returns a random joke. It fetches a joke from an external API (for this example, we use a public joke service, but any similar service is fine) and extracts the setup and punchline. The response is then formatted as a single string (setup + punchline) and returned as a JSON response to the client.
The application doesn't do much, but from an architectural point of view, it comprises two tiers: the frontend and the backend with the API. Our frontend is simple and cannot fail, but the backend, our "joke" API, can fail: it relies on an external service that is out of our control.
\\nLet’s look at the current version of the API:
\\nimport { FreshContext } from \\"$fresh/server.ts\\";\\n\\nexport const handler = async (_req: Request, _ctx: FreshContext): Promise<Response> => {\\n const res = await fetch(\\n \\"https://official-joke-api.appspot.com/random_joke\\",\\n );\\n const newJoke = await res.json();\\n\\n const body = JSON.stringify(newJoke.setup + \\" \\" + newJoke.punchline);\\n\\n return new Response(body);\\n};\\n\\n
The first kind of failure we will implement is aiming to randomly get a timeout on the external API call. Let’s modify the code:
\\nimport { FreshContext } from \\"$fresh/server.ts\\";\\n\\nexport const handler = async (\\n _req: Request,\\n _ctx: FreshContext,\\n): Promise<Response> => {\\n // Simulate a timeout by setting a timeout promise\\n const timeoutPromise = new Promise((resolve) =>\\n setTimeout(() => resolve(null), 200)\\n );\\n\\n // Fetch the joke from the external API\\n const fetchPromise = fetch(\\n \\"https://official-joke-api.appspot.com/random_joke\\",\\n );\\n\\n // Race the fetch promise against the timeout\\n const res = await Promise.race([fetchPromise, timeoutPromise]);\\n\\n if (res instanceof Response) {\\n const newJoke = await res.json();\\n const body = JSON.stringify(newJoke.setup + \\" \\" + newJoke.punchline);\\n return new Response(body);\\n } else {\\n return new Response(\\"Failed to fetch joke\\", { status: 500 });\\n }\\n};\\n\\n
In this new version, we add a timeoutPromise
that will “race
” with our external API call: if the external API answers in less than 200ms
(i.e. wins the race), we get a new joke, otherwise, we get null
as a result. This is disruptive – our frontend relies on the response from the API as a JSON object, and it gets a message (“Failed to fetch joke”) and a 500 HTTP error. In the browser, it will produce these effects:
The joke is not refreshed, and you get an error message in the console because the message returned by the API is not valid JSON. To mitigate the random timeouts we injected into our API code, we can provide a safety net: when the fetch fails, we return a standard joke formatted as the frontend expects:
\\n...\\n\\n // Race the fetch promise against the timeout\\n const res = await Promise.race([fetchPromise, timeoutPromise]);\\n\\n if (res === null) {\\n // If the timeout wins, return a fallback response\\n const fallbackJoke = {\\n setup: \\"[cached] Why did the developer go broke?\\",\\n punchline: \\"Because they used up all their cache!\\",\\n };\\n const body = JSON.stringify(\\n fallbackJoke.setup + \\" \\" + fallbackJoke.punchline,\\n );\\n return new Response(body);\\n }\\n ...\\n\\n
To mitigate the effects of the failure we just created, we check whether the call returned null; in that case, it comes in handy to have a fallbackJoke that is returned in the same format expected by the frontend. This simple mechanism has increased the resilience of our API against a particular type of failure: the unpredictable timeout of the external API.
In the timeout example, the mechanism we deployed to mitigate still relies on the fact that the server with the external API is reachable. If you unplug the network cable from your PC (or activate airplane mode), you will see that the frontend will fail in a new way:
The reason is that the backend is not able to reach the external API server and thus returns an error to the frontend (check the logs from Deno for more information). To mitigate this situation, we must modify the backend to be aware of the failure of the external API and then handle it by serving a fallback joke:
\\n...\\n // If the fetch completes in time, proceed as usual\\n if (res instanceof Response) {\\n const newJoke = await res.json();\\n const body = JSON.stringify(newJoke.setup + \\" \\" + newJoke.punchline);\\n return new Response(body);\\n } else {\\n throw new Error(\\"Failed to fetch joke\\");\\n }\\n } catch (_error) {\\n // Handle any other errors (e.g., network issues)\\n const errorJoke = {\\n setup: \\"[cached] Why did the API call fail?\\",\\n punchline: \\"Because it couldn\'t handle the request!\\",\\n };\\n const body = JSON.stringify(errorJoke.setup + \\" \\" + errorJoke.punchline);\\n return new Response(body, { status: 500 });\\n }\\n};\\n\\n
The mitigation relies on wrapping the whole interaction with the external API server in a try/catch block instead of returning a generic "Failed to fetch joke" message. This block lets us handle the network failure by serving a local joke rather than a bare error message. This is the final solution to the possible errors you can get on the backend, and it increases the system's resilience.
\\nIn the previous section, we increased the resilience to failures but we also want to keep a user-centric approach as a part of the graceful degradation. At the moment, the user is not aware if the joke they get is fresh or not. To increase this knowledge, we will extend the JSON returned from the backend to keep track of the freshness of the joke. When the external API fails, the JSON that is returned to the frontend will state that the joke is not fresh (fresh
is false
):
const errorJoke = {\\n setup: \\"Why did the API call fail?\\",\\n punchline: \\"Because it couldn\'t handle the request!\\",\\n fresh: false\\n };\\n\\n
Otherwise, when the external API succeeds, we return a JSON object with the fresh
field set to true
:
if (res instanceof Response) {\\n const newJoke = await res.json();\\n newJoke.fresh = true;\\n const body = JSON.stringify(newJoke);\\n return new Response(body);\\n }\\n\\n
Now that the frontend receives the freshness of every joke, we just need to show it to the user:
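A minimal sketch of how the island could render that flag (the component and markup here are assumed, not the repository's exact code):

// Sketch: surface the freshness flag next to the joke
function JokeView({ joke }) {
  return (
    <div>
      <p>{joke.setup} {joke.punchline}</p>
      {!joke.fresh && (
        <p style={{ color: "red" }}>Cached joke: the joke service is unreachable</p>
      )}
    </div>
  );
}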
\\nWhen the external API call fails, a message is shown in red, so the user knows what they are getting.
\\nIn this article, we explored the concept of graceful degradation, highlighting two mechanisms for mitigating system failures. We explored two principles for implementing graceful degradation: building resilient components to withstand failures and adopting a user-centric approach so users are aware of any limited functionalities of the system in case of failures.
The web development landscape is shifting back toward server-side rendering and away from JavaScript-heavy client-side architectures. This trend has been fueled by tools like React Server Components and the app
directory in frameworks like Next.js, which simplifies server-side routing and rendering.
In response to this shift, tools like htmx are gaining popularity for building interactive web experiences with minimal JavaScript. The HTML-based htmx allows for server-side rendering using AJAX. In this article, we’ll explore how to build a high-performance website using htmx and Go, a backend language known for its speed and efficiency.
\\nhtmx is a lightweight JavaScript library that enables building large, dynamic sites with minimal reliance on client-side JavaScript.
htmx adds various AJAX-like attributes to plain HTML that is rendered on the server, which allows developers to achieve AJAX-style updates and dynamic interactions on their pages.
\\nLet’s see a quick example straight from the docs to demonstrate how htmx handles dynamic interactions:
\\n<button hx-post=\\"/clicked\\"\\n hx-trigger=\\"click\\"\\n hx-target=\\"#parent-div\\"\\n hx-swap=\\"outerHTML\\">\\n Click Me!\\n</button>\\n\\n
Here, a button element is given various attributes. When clicked, the hx-post=\\"/clicked\\"
attribute sends an HTTP POST request to the /clicked
API. Afterward, the button click will swap the targeted div with an ID of #parent-div
with the response received from the API.
This is how htmx handles typical dynamic interactions. As you can see, the page or the element in this case will be server-rendered, thus quite quick in terms of interactivity, while saving on client-side JavaScript bundles.
\\nGolang, or Go, is a high-performance, typed programming language. Its automatic garbage collection, efficient concurrency model, and rapid execution make it a popular choice for building scalable backends.
\\nSetting up a Go server is the first step in building a backend with Go. Go’s specification makes it easy to quickly spin up a server by using its built-in net/http
package. Assuming you have Go set up in your system, you can create a Go project in a directory and start by creating a file called main. go
.
In this file, you have to import the fmt
for string and log formatting and net/http
for initiating the server:
package main\\nimport (\\n \\"fmt\\"\\n \\"net/http\\"\\n)\\n\\n
Next, define the main function with the following server code:
package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Hello, World!")
    })
    fmt.Println("Server running at http://localhost:8080")
    http.ListenAndServe(":8080", nil)
}
This will run your server on port 8080; the terminal prints the startup message, and visiting http://localhost:8080 returns "Hello, World!".
\\nYou can go a step further and, instead of printing the log, you can render a simple UI by changing the main
function:
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    // Set the Content-Type header to HTML
    w.Header().Set("Content-Type", "text/html")
    // Write an HTML response
    fmt.Fprintln(w, `
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Hello, World</title>
</head>
<body>
  <h1>Hello, World!</h1>
  <p>Welcome to your first Go web server.</p>
</body>
</html>
`)
})
Now, at the root /
, this HTML will be rendered instead. The key thing to note here is that w.Header().Set(\\"Content-Type\\", \\"text/html\\")
sets the response header to indicate the content type is HTML. Finally, you can execute this file by running the command go run main.go
where main.go
is the filename.
You can use htmx to render the same HTML snippet with htmx-specific attributes that will allow you to add interactions to the page.
\\nYou can integrate htmx in this project by just using a CDN, and including it in your script wherever you are rendering:
\\n<script src=\\"https://unpkg.com/htmx.org\\"></script>\\n\\n
In this example, you can update your main()
function to include the htmx syntax:
func main() {
    // Handler for the main page
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/html")
        fmt.Fprintln(w, `
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>HTMX Demo</title>
  <script src="https://unpkg.com/htmx.org"></script>
</head>
<body>
  <h1>HTMX Demo</h1>
  <div id="content">
    <p>Click the button to fetch updated content!</p>
  </div>
  <button hx-get="/update" hx-target="#content" hx-swap="innerHTML">
    Get Updated Content
  </button>
</body>
</html>
`)
    })
The script tag loads htmx, which is what makes the hx-* attributes work. So what is really happening here?
Well, as you have seen in the first section of this article, the hx-get=\\"/update\\"
attribute will get a response from the /update
API and will swap the innerHTML
due to hx-swap=\\"innerHTML\\"
. This new response will update the div with an ID of #content
due to the hx-target=\\"#content\\"
attribute.
For all of this to happen, you need to have the /update
endpoint that will send a content response that is supposed to replace the existing HTML content. In Go, you can create such a handler like so:
http.HandleFunc("/update", func(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/html")
    fmt.Fprintln(w, `<p>Content updated at: `+r.RemoteAddr+`</p>`)
})
This handler will send the HTML response <p>Content updated at: ...</p>, where r.RemoteAddr prints the address (IP and port) of the user.
Now that you understand the basic implementation with Go, we’re going to go a little deeper and build a small to-do list app.
First things first, create a project directory on your system. Then initialize a Go module inside it by running the following command:
go mod init todo-app\\n\\n
Here, todo-app is your module name; the command generates a go.mod file. Next, create a main.go file where you will write all the backend logic. Now, you need something to store your entries to ensure data persistence.
You can use MySQL to store the records for the items that your to-do app will contain. You'll need to install the Go MySQL driver by running the following:
\\ngo get -u github.com/go-sql-driver/mysql\\n
Finally, in your main.go
file, import the following libraries:
package main\\nimport (\\n \\"database/sql\\"\\n \\"fmt\\"\\n \\"html/template\\"\\n \\"log\\"\\n \\"net/http\\"\\n _ \\"github.com/go-sql-driver/mysql\\"\\n)\\n\\n
Now, you need to define the schema for your to-do items. Any to-do items will have an ID and a status to track if it is completed or not. In Go, you can have this schema typed in as follows:
\\ntype Todo struct {\\n ID int `json:\\"id\\"`\\n Title string `json:\\"title\\"`\\n Completed bool `json:\\"completed\\"`\\n}\\n\\n
With the schema set, you now need to have an indexHandler
function that will render an HTML file to the browser; from there, the rest of your backend logic will mutate the rendered HTML based on new to-do items or their completion status:
func indexHandler(w http.ResponseWriter, r *http.Request) {
    tmpl, err := template.ParseFiles("index.html")
    if err != nil {
        http.Error(w, "Unable to load index.html", http.StatusInternalServerError)
        return
    }
    tmpl.Execute(w, nil)
}
With indexHandler added, the next step is to define API endpoints and their corresponding functions:

- getTodosHandler — Gets all the to-do items from the SQL backend
- addTodoHandler — Adds a new item from the user's HTML input field
- deleteTodoHandler — Deletes an item by handling the delete button click
- completeTodoHandler — Toggles an item's status and marks it as completed

You can find the complete main.go backend logic below:
package main

import (
    "database/sql"
    "fmt"
    "html/template"
    "log"
    "net/http"

    _ "github.com/go-sql-driver/mysql"
)

type Todo struct {
    ID        int    `json:"id"`
    Title     string `json:"title"`
    Completed bool   `json:"completed"`
}

var db *sql.DB

func main() {
    var err error
    dsn := "root:Thecityofroma@123@tcp(localhost:3306)/todo_app"
    db, err = sql.Open("mysql", dsn)
    if err != nil {
        log.Fatalf("Error connecting to the database: %v", err)
    }
    defer db.Close()
    if err = db.Ping(); err != nil {
        log.Fatalf("Error pinging the database: %v", err)
    }
    http.HandleFunc("/", indexHandler)
    http.HandleFunc("/api/todos", func(w http.ResponseWriter, r *http.Request) {
        if r.Method == http.MethodGet {
            getTodosHandler(w, r)
        } else if r.Method == http.MethodPost {
            addTodoHandler(w, r)
        } else {
            http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
        }
    })
    http.HandleFunc("/api/delete-todo", deleteTodoHandler)
    http.HandleFunc("/api/complete-todo", completeTodoHandler)
    log.Println("Server is running on http://localhost:8080")
    if err := http.ListenAndServe(":8080", nil); err != nil {
        log.Fatalf("Error starting server: %v", err)
    }
}

func indexHandler(w http.ResponseWriter, r *http.Request) {
    tmpl, err := template.ParseFiles("index.html")
    if err != nil {
        http.Error(w, "Unable to load index.html", http.StatusInternalServerError)
        return
    }
    tmpl.Execute(w, nil)
}

func renderTodoHTML(todo Todo) string {
    completedStatus := ""
    bgColor := "white"
    buttonText := "Complete"
    if todo.Completed {
        completedStatus = " (Completed)"
        bgColor = "#f0f0f0" // Light grey background for completed tasks
        buttonText = "Uncomplete"
    }
    return fmt.Sprintf(`
<div class="todo-item" id="todo-%d" style="background-color: %s;">
    <p><strong>%s</strong>%s</p>
    <button hx-post="/api/delete-todo"
            hx-target="#todo-%d"
            hx-swap="outerHTML"
            hx-include="#todo-%d [name=id]"
            type="button">
        Delete
    </button>
    <button hx-post="/api/complete-todo"
            hx-target="#todo-%d"
            hx-swap="outerHTML"
            hx-include="#todo-%d [name=id]"
            type="button">
        %s
    </button>
    <input type="hidden" name="id" value="%d">
</div>`, todo.ID, bgColor, todo.Title, completedStatus, todo.ID, todo.ID, todo.ID, todo.ID, buttonText, todo.ID)
}

func getTodosHandler(w http.ResponseWriter, r *http.Request) {
    rows, err := db.Query("SELECT id, title, completed FROM todos")
    if err != nil {
        http.Error(w, "Unable to fetch TODO items", http.StatusInternalServerError)
        return
    }
    defer rows.Close()
    var todos []Todo
    for rows.Next() {
        var todo Todo
        if err := rows.Scan(&todo.ID, &todo.Title, &todo.Completed); err != nil {
            http.Error(w, "Error reading TODO items", http.StatusInternalServerError)
            return
        }
        todos = append(todos, todo)
    }
    var html string
    for _, todo := range todos {
        html += renderTodoHTML(todo)
    }
    w.Header().Set("Content-Type", "text/html")
    w.Write([]byte(html))
}

func addTodoHandler(w http.ResponseWriter, r *http.Request) {
    if err := r.ParseForm(); err != nil {
        http.Error(w, "Invalid form data", http.StatusBadRequest)
        return
    }
    title := r.FormValue("title")
    if title == "" {
        http.Error(w, "Title is required", http.StatusBadRequest)
        return
    }
    // Insert new TODO into the database
    result, err := db.Exec("INSERT INTO todos (title, completed) VALUES (?, false)", title)
    if err != nil {
        http.Error(w, "Unable to add TODO item", http.StatusInternalServerError)
        return
    }
    // Get the last inserted ID
    id, err := result.LastInsertId()
    if err != nil {
        http.Error(w, "Unable to fetch inserted ID", http.StatusInternalServerError)
        return
    }
    // Fetch the newly added TODO from the database
    todo := Todo{
        ID:        int(id),
        Title:     title,
        Completed: false,
    }
    // Render the newly added TODO item as HTML
    html := renderTodoHTML(todo)
    // Return the generated HTML for the new todo
    w.Header().Set("Content-Type", "text/html")
    w.Write([]byte(html))
}

// deleteTodoHandler deletes a TODO item by ID.
func deleteTodoHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
        return
    }
    if err := r.ParseForm(); err != nil {
        http.Error(w, "Invalid form data", http.StatusBadRequest)
        return
    }
    id := r.FormValue("id")
    if id == "" {
        http.Error(w, "ID is required", http.StatusBadRequest)
        return
    }
    // Execute the delete query
    _, err := db.Exec("DELETE FROM todos WHERE id = ?", id)
    if err != nil {
        http.Error(w, "Unable to delete TODO item", http.StatusInternalServerError)
        return
    }
    // Respond with an empty string to indicate successful deletion.
    w.Header().Set("Content-Type", "text/html")
    w.Write([]byte(""))
}

// completeTodoHandler toggles the completed status of a TODO item by ID.
func completeTodoHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
        return
    }
    if err := r.ParseForm(); err != nil {
        http.Error(w, "Invalid form data", http.StatusBadRequest)
        return
    }
    id := r.FormValue("id")
    if id == "" {
        http.Error(w, "ID is required", http.StatusBadRequest)
        return
    }
    // Toggle the completed status
    var completed bool
    err := db.QueryRow("SELECT completed FROM todos WHERE id = ?", id).Scan(&completed)
    if err == sql.ErrNoRows {
        http.Error(w, "TODO item not found", http.StatusNotFound)
        return
    } else if err != nil {
        http.Error(w, "Unable to fetch TODO item", http.StatusInternalServerError)
        return
    }
    // Update the completed status
    _, err = db.Exec("UPDATE todos SET completed = ? WHERE id = ?", !completed, id)
    if err != nil {
        http.Error(w, "Unable to update TODO item", http.StatusInternalServerError)
        return
    }
    // Fetch the updated TODO item
    var todo Todo
    err = db.QueryRow("SELECT id, title, completed FROM todos WHERE id = ?", id).Scan(&todo.ID, &todo.Title, &todo.Completed)
    if err == sql.ErrNoRows {
        http.Error(w, "Updated TODO item not found", http.StatusNotFound)
        return
    } else if err != nil {
        http.Error(w, "Unable to fetch updated TODO item", http.StatusInternalServerError)
        return
    }
    // Render and return the updated TODO item's HTML
    html := renderTodoHTML(todo)
    w.Header().Set("Content-Type", "text/html")
    w.Write([]byte(html))
}
To make sure your to-do items are persisted, you have to save them to a local database. In this example, I’ll use MySQL. Spin up a new terminal and, assuming you have MySQL installed on your system, create a new database by running:
CREATE DATABASE todo_app;
Now create a todos table in the todo_app database with the following columns and keys:
CREATE TABLE todos (
  id INT AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL,
  completed BOOLEAN DEFAULT FALSE
);
Here, the id
is our primary key.
To make sure your database has been created, you can run show databases; it will list all your databases as follows:
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| testdb             |
| todo_app           |  <- this is your database
+--------------------+
To check the entries in your app, run SELECT id, title, completed FROM todos;, which will list all the to-do item entries.
Now, with the main.go and the MySQL logic in place, you can move over to the HTML side and create a file called index.html. It will be responsible for rendering and swapping items based on the mutations performed by the backend logic in the main.go file:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>TODO List</title>
  <script src="https://unpkg.com/htmx.org"></script>
  <style>
    body {
      font-family: Arial, sans-serif;
      margin: 20px;
    }
    h1 {
      color: #333;
    }
    .todo-item {
      border: 1px solid #ddd;
      padding: 10px;
      margin-bottom: 10px;
      border-radius: 5px;
    }
  </style>
</head>
<body>
  <h1>TODO List</h1>

  <!-- Form to add a new TODO -->
  <form id="add-todo-form" hx-post="/api/todos" hx-swap="beforeend" hx-target="#todo-list">
    <input type="text" name="title" placeholder="Enter a TODO item" required>
    <button type="submit">Add</button>
  </form>
  <div id="todo-list"
       hx-get="/api/todos"
       hx-trigger="load"
       hx-swap="innerHTML">
    <!-- TODO items will be loaded here dynamically -->
  </div>
</body>
</html>
Notice that the CSS is written in the same file, but you can move the styling to a stylesheet of its own based on your preference and link that file from the HTML. Find the complete code in this GitHub repository.
Make sure you have MySQL set up on your system for the application to work correctly. You can see the preview here:
In a typical htmx and Go setup, you already have an application that is quite fast, as it leverages server-side rendering, but there are still optimization steps you can take on both the frontend and backend as you scale up your application. Below are a few methods I recommend.
\\nBackend optimization ensures smooth API delivery and scales an application’s performance. Go is built with optimal performance and scalability in mind.
Go offers quick database interactions that result in fast performance. It provides both native database drivers and sqlx for simplified querying. As you saw in this article, you used the native MySQL driver just by importing a package straight from GitHub. Similarly, you can use sqlx for reduced boilerplate and more built-in features, like struct mapping.
\\nGo offers several approaches to caching to reduce computation and not overburden the database. You can use in-memory caching techniques such as sync.Map
for lightweight, easy-to-use key-value pair-styled caching:
import \\"sync\\"\\nvar cache sync.Map\\n// setting a value in in-memory cache\\nfunc setCache(key, value string) {\\ncache.Store(key, value)\\n}\\n\\n
Similarly, for more advanced use cases, you can use Redis. All you have to do is import the package from GitHub and get started:
\\nimport ( \\"github.com/redis/go-redis/v9\\" \\"context\\" )\\n\\n
It would be unfair to talk about Go and not mention concurrency. Goroutines are small, lightweight threads managed by Go’s runtime, and they can be quite powerful when paired with the rest of Go’s concurrency toolkit:
// assumes "fmt" and "sync" are imported
func doTask(id int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Processing task %d\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 10; i++ {
        wg.Add(1)
        go doTask(i, &wg)
    }
    wg.Wait() // without this, main could exit before the goroutines run
}
In this application, you optimized the frontend by using htmx, an external third-party library that leans heavily on server-side rendering. htmx not only makes it easier to develop applications with Go, but it also minimizes the payload.
It offers some smart, well-thought-out attributes, such as hx-trigger="revealed", to lazy-load content. If you are building a scalable server-side-rendered app and aren’t already using another server-rendering library, htmx is probably the missing piece.
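As a small illustration of that attribute, the request fires only when the element scrolls into view; the /api/more-todos endpoint here is hypothetical, not part of the app above:

<!-- htmx issues the GET when this div is revealed in the viewport;
     /api/more-todos is a made-up endpoint for this sketch -->
<div hx-get="/api/more-todos"
     hx-trigger="revealed"
     hx-swap="afterend">
  Loading more...
</div>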
As web development trends shift toward optimizing performance and reducing JavaScript overhead, the htmx and Go stack provides an efficient alternative to traditional frontend-heavy frameworks.
\\nWhen building an app with htmx and Go, you can maintain a clear separation between backend logic and UI updates, which can save you a lot of time when working across different teams.
\\nWhile htmx is relatively new and may have a learning curve, developers with a solid Go background will find it a powerful choice for building fast, server-rendered applications.
The scroll-select box is a great tool for frontend developers to improve the user experience of their applications.
\\nNeed a way for users to pick stuff from a list? Just use the normal <select>
dropdown element. You know the little box that expands into a list of choices, doesn’t take much space, and has built-in keyboard navigation? Yeah, that one!
But now let’s say we have a long list of options (maybe a long list of birth years) to choose from. Sure, there’s keyboard navigation. But let’s be real; who’s going to click through hundreds of options? Most folks (especially me) are just gonna scroll.
So we are left with one option: scroll and select. This isn’t bad, but what if we make it more interesting by having the option automatically select itself as you scroll through it?
\\nInstead of the traditional “scroll, stop, click”, you’d just scroll until you see what you want, and boom, it’s selected. Simple change, but it makes the whole experience feel more fluid and, honestly, kind of fun.
\\nHere’s an example of what I’m referring to: the scroll-select box, aka scroll-to-select form control:
That up there is exactly what we’re going to build today: a scrollable date picker that mimics the iOS style, but without the <select> element. In its place, we will use mostly CSS and JavaScript to build our scroll-to-select form, because that approach is more customizable.
Before diving into the code, let’s understand the key concepts we’ll be using to create our scroll-select box:
CSS scroll snap allows us to create smooth scrolling experiences by defining “snap points” where the viewport stops after a user finishes scrolling. For my fellow TikTok binge-watchers, that is what happens whenever we scroll past a video, although the effect is nicer when it’s a bit slower; TikTok’s own is quite fast (understandable for its use case).
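Before we use it in the date picker, here is the feature in isolation; the class names in this mini demo are made up:

.reel {
  overflow-y: auto;
  scroll-snap-type: y mandatory; /* the scroll container enforces snapping on the y axis */
}

.reel > .item {
  scroll-snap-align: start; /* each child declares where it snaps within the container */
}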
\\nThe Intersection Observer API is more like an eye that watches which options come into view. Technically, it lets us detect when elements enter or leave the viewport. We’ll use this to determine which option should be selected as the user scrolls.
\\nIts implementations look like this:
const observer = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      // Hey, this option is visible, do something to it!
      selectOption(entry.target);
    }
  });
});
In this article, CSS custom properties will be used to maintain consistent styles throughout the application. In case this is new to you, think of it as a simple design system: you declare a variable that holds a style value, and every rule that references the variable updates automatically whenever you change it. It’s great for keeping styles consistent.
\\nThe rest of our core elements are styles and logic that you are free to customize to your taste. Let’s set up our HTML structure, link style sheet, and script.
\\nWe will use a simple structure where we have each selector (month, day, and year) following the same pattern:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Scroll-to-select-form by Logrocket</title>
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <h1>Logrocket Scroll To select Date Picker</h1>
  <div class="date-picker-container">
    <!-- Month Selector -->
    <div class="custom-select" id="monthSelect">
      <div class="select-display">Month</div>
      <div class="options-selector"></div>
    </div>

    <!-- Day Selector -->
    <div class="custom-select" id="daySelect">
      <div class="select-display">Day</div>
      <div class="options-selector"></div>
    </div>

    <!-- Year Selector -->
    <div class="custom-select" id="yearSelect">
      <div class="select-display">Year</div>
      <div class="options-selector"></div>
    </div>
  </div>
  <div class="selected-date" id="selectedDate">Select a date</div>

  <script src="script.js"></script>
</body>
</html>
Rather than using the <select>
elements above, we’re using custom div
s. Later on, this will help us create those scrollable options with JavaScript. The .select-display
shows the current selection, while .options-selector
will contain our scrollable options. We will look at the styling next.
Let’s set up our base styles and declare our CSS variable for our scroll-select box:
\\n/* Root variables with color scheme */\\n:root {\\n --primary-color: #9c27b0; /* Purple */\\n --secondary-color: #e1bee7;\\n --gradient-start: #ba68c8;\\n --gradient-end: #7b1fa2;\\n --container-width: 210px;\\n --item-height: 40px;\\n --spacing: 10px;\\n}\\n\\n/* Reset default styles */\\n* {\\n margin: 0;\\n padding: 0;\\n box-sizing: border-box;\\n}\\n\\n/* Base layout styles */\\nbody {\\n min-height: 100vh;\\n display: flex;\\n flex-direction: column;\\n align-items: center;\\n justify-content: center;\\n gap: 2rem;\\n font-family: system-ui, -apple-system, sans-serif;\\n -webkit-user-select: none;\\n user-select: none;\\n background-color: #fafafa;\\n}\\n\\nh1 {\\n font-size: 1.5rem;\\n color: var(--primary-color);\\n}\\n\\n\\n/* Date picker container */\\n.date-picker-container {\\n display: flex;\\n gap: 1rem;\\n align-items: flex-start;\\n}\\n\\n\\n
The code above sets the stage for our form. In the body, we centered our content using Flexbox and gave the page a light gray background. For the h1, we styled the header with the purple color defined in our variables above.
In our .date-picker-container, we created a horizontal layout for the three dropdowns (month, day, and year).
The -webkit-user-select: none
property gives us the native application feel by preventing text selection during scrolling. If these basic technical words don’t drive the point home, all the code does is pick our color, size everything just right, and ensure it all sits nicely centered on the page.
Going further for the styles, we will want to create those visible buttons for our month/day/year selectors:
\\n/* Custom select styles */\\n.custom-select {\\n position: relative;\\n width: var(--container-width);\\n}\\n\\n/* Selected value display */\\n.select-display {\\n width: 100%;\\n height: var(--item-height);\\n padding: 0 1rem;\\n background: linear-gradient(to right, var(--gradient-start), var(--gradient-end));\\n color: white;\\n border-radius: 6px;\\n display: flex;\\n align-items: center;\\n justify-content: space-between;\\n cursor: pointer;\\n font-size: 1.25rem;\\n box-shadow: 0 2px 5px rgba(156, 39, 176, 0.2);\\n}\\n
In the code above, we attached a relative position to .custom-select. This is important because it helps position the dropdown menu that appears below it when clicked.
When a user sees “January”, “15th”, or “2025”, the .select-display
handles the styling. The buttons have a purple gradient, white text, and a small shadow that makes them appear to “float”.
Each of these buttons gets a downward arrow (▼) attached via CSS. Whenever a user clicks the month, day, or year button and the options are displayed, the arrow rotates 180°, indicating the open or closed state:
\\n.select-display::after {\\n content: \'▼\';\\n font-size: 0.8em;\\n transition: transform 0.3s ease;\\n}\\n\\n.custom-select.open .select-display::after {\\n transform: rotate(180deg);\\n}\\n
In the code above, the transition makes the rotation smooth rather than instant. One may ask: how will CSS make this rotation interactive? In reality, CSS alone won’t; JavaScript will toggle the class that triggers it. For the sake of better understanding, we want to finish everything concerning CSS before we move on to JavaScript.
We will go ahead and style our dropdown container below:
\\n/* Options dropdown */\\n.options-selector {\\n position: absolute;\\n top: calc(var(--item-height) + var(--spacing));\\n width: 100%;\\n height: calc(var(--item-height) * 7 + var(--spacing) * 6);\\n overflow-y: auto;\\n scroll-snap-type: y mandatory;\\n overscroll-behavior-y: none;\\n border-radius: 8px;\\n box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);\\n padding: var(--spacing);\\n background: white;\\n \\n /* Hide scrollbar for different browsers */\\n &::-webkit-scrollbar {\\n display: none;\\n }\\n -ms-overflow-style: none;\\n scrollbar-width: none;\\n \\n /* Animation states */\\n opacity: 0;\\n visibility: hidden;\\n transform: translateY(-10px);\\n transition: all 0.3s ease;\\n z-index: 100;\\n}\\n\\n.custom-select.open .options-selector {\\n opacity: 1;\\n visibility: visible;\\n transform: translateY(0);\\n}\\n
This is where it gets more entertaining. We’re using position: absolute
so our dropdown floats over other content. The height calculation is meant to show exactly seven items at once. I’ve found this to be my sweet spot for usability.
As mentioned earlier, scroll-snap-type: y mandatory
is used to create that satisfying snap effect you feel when scrolling through options on your phone.
The overscroll-behavior-y: none
is just good manners; it stops the whole page from scrolling when you get to the end of our options.
We will want that smooth fade of the dropdown. That’s where the animation comes in. The opacity
, visibility
, and transform properties are responsible for the slick fade in/out when we toggle our dropdown.
For the individual options, we want them to look clickable and respond with styles when selected:
/* Option items */
.option-item {
  display: flex;
  align-items: center;
  justify-content: center;
  height: var(--item-height);
  margin-bottom: var(--spacing);
  background: linear-gradient(to right, var(--gradient-start), var(--gradient-end));
  border-radius: 6px;
  color: white;
  font-size: 1.25rem;
  scroll-snap-align: start; /* explained below */
  transition: background-color 0.3s ease;
  cursor: pointer;
}

.option-item:last-child {
  margin-bottom: calc(var(--item-height) * 6);
}
In the code above, we gave each option a nice gradient background and transitions, while flexbox keeps everything perfectly aligned.

The transition property is what gives us that smooth color change when you scroll or select an option. As for .option-item:last-child, it adds extra space after the last option in the dropdown.
When an option is snapped into place in our demo, it changes color and scales up by 8%. Let’s add that below with a few styles:
\\n.option-item.selected {\\n background: var(--primary-color);\\n \\n transform: scale(1.08);\\n transition: all 0.3s ease;\\n}\\n
Below, I singled out the scroll-snap-align
property because of its importance:
scroll-snap-align: start;\\n
The scroll-snap-align
property tells the browser where each option should snap when scrolling. Setting it to start
means each option will align with the top of our container, creating that precise scrolling effect. Without this, our scroll-snap-type: y mandatory
wouldn’t know where to snap to. They work together as a team to create that scrolling experience.
For our .selected-date
font, we want to simply add a little margin on top, give it our primary color, and generally make it look nice:
/* Selected date display */\\n.selected-date {\\n margin-top: 2rem;\\n color: var(--primary-color);\\n font-size: 1.2rem;\\n font-weight: 500;\\n}\\n
This is what our application looks like:
\\nIt is not very interactive now, because we have not yet introduced JavaScript. That’s it for styling; let’s jump into the really fun part of the scroll-select box project.
In this section, we will make our application interactive with JavaScript. For a start, we will need a list of months and our year data. Let’s say, for this example, we also want the user to be no younger than 18 and no older than 34.
This is just a personal choice to mimic a real-world implementation. Let’s get that done with the code below:
\\nconst months = [\\n \'January\', \'February\', \'March\', \'April\', \'May\', \'June\',\\n \'July\', \'August\', \'September\', \'October\', \'November\', \'December\'\\n];\\nconst startYear = 1990;\\nconst endYear = 2007;\\n
In the code above, we set up the foundational data for the date picker application. The months
array contains all 12 months.
The startYear and endYear constants define the range of birth years users can select from. We will use these constants to populate the dropdown options and validate date selections.
When a user clicks on any option (like “March” or “2004”), we want to update the state of the .select-display to show the selected option:
function selectOption(option, container) {\\n const display = container.querySelector(\'.select-display\');\\n display.textContent = option.textContent;\\n document.querySelectorAll(`#${container.id} .option-item`).forEach(opt =>\\n opt.classList.remove(\'selected\'));\\n option.classList.add(\'selected\');\\n updateSelectedDate();\\n}\\n
In the code above, after updating the .select-display, the .selected class gets added to your choice, which triggers that scale animation.
Amidst all this, the code is cleaned up by removing the selected
class from any previously picked options. At the end of the function, it calls a new function updateSelectedDate()
.
This function will be created below. All it does is update the selected-date
at the bottom of your screen as seen in the demo above.
The createOptions
function is used to build all the option boxes you see:
function createOptions(container, items, type) {\\n const selector = container.querySelector(\'.options-selector\');\\n const display = container.querySelector(\'.select-display\');\\n \\n items.forEach(item => {\\n const option = document.createElement(\'div\');\\n option.className = \'option-item\';\\n option.textContent = item;\\n \\n option.addEventListener(\'click\', () => {\\n selectOption(option, container);\\n container.classList.remove(\'open\');\\n });\\n \\n selector.appendChild(option);\\n });\\n}\\n
The function above takes three parameters which are container, items, and type.
These parameters define:
\\ncontainer
– Which dropdown to populate (monthSelect
, daySelect
, yearSelect
)
items
– The values to display (months, days 1-31, years 1990-2007)
type
– Dropdown identifier (month, day, year)
The function takes this data and transforms it into clickable options inside each dropdown. It creates a new div with the option-item
class, which was earlier styled in the CSS above. It also sets up a basic click handler that selects the option and closes the dropdown.
This function works with initializeSelectors() below, which calls it once for each dropdown with the right data. So let’s go ahead and create the initializeSelectors() function:
function initializeSelectors() {\\n createOptions(monthSelect, months, \'month\');\\n createOptions(daySelect, Array.from({length: 31}, (_, i) => i + 1), \'day\');\\n createOptions(yearSelect, Array.from({length: endYear - startYear + 1}, (_, i) => startYear + i), \'year\');\\n}\\n
The code above creates the month options using our months
array. It then generates the day options from one to 31.
There are many ways to generate a sequence of numbers, but I found this array trick enticing. It simply creates an array with 31 empty slots. The second argument takes each index i (starting at 0) and increments it by one. It does the same for the years. Interesting, right?
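To see the trick in isolation, here is what Array.from produces for a smaller length:

// Array.from({ length: n }, mapFn) builds an array by calling mapFn(undefined, index)
const days = Array.from({ length: 5 }, (_, i) => i + 1);
console.log(days); // [1, 2, 3, 4, 5]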
Together, createOptions() and initializeSelectors() transform this:
<div class=\\"options-selector\\"></div>\\n
…into a scrollable list of options that inherit our CSS styles with the snap-scrolling behavior.
\\nUp to this point, we have been able to create and handle our options. Now let’s take it a step further by handling the dropdown.
\\nThe setupDropdownHandlers()
function toggles the clicked dropdowns using the open
class:
function setupDropdownHandlers() {\\n document.querySelectorAll(\'.custom-select\').forEach(select => {\\n const display = select.querySelector(\'.select-display\');\\n \\n display.addEventListener(\'click\', (e) => {\\n e.stopPropagation();\\n // Close all other dropdowns\\n document.querySelectorAll(\'.custom-select\').forEach(s => {\\n if (s !== select) s.classList.remove(\'open\');\\n });\\n select.classList.toggle(\'open\');\\n });\\n });\\n}\\n
This connects to your CSS where .custom-select.open
triggers the dropdown’s visibility through:
.custom-select.open .options-selector {\\n opacity: 1;\\n visibility: visible;\\n transform: translateY(0);\\n}\\n
It also prevents the click from affecting other elements (stopPropagation
). For a better user experience, it closes any other open dropdowns by removing their open
class.
Now when we click either month, year, or day, we have a scrollable dropdown, where we can select from our options:
If you noticed, when we clicked outside, the dropdown didn’t close. Closing it is important for the user experience, so we’ll handle that right away:
\\n// Close dropdowns when clicking outside\\nfunction clickHandler() {\\n document.addEventListener(\'click\', () => {\\n document.querySelectorAll(\'.custom-select\').forEach(select =>\\n select.classList.remove(\'open\'));\\n });\\n\\n // Prevent closing when clicking inside dropdown\\n document.querySelectorAll(\'.options-selector\').forEach(selector => {\\n selector.addEventListener(\'click\', (e) => e.stopPropagation());\\n });\\n}\\n
The clickHandler() enables dropdown closure when you click outside. The function is also smart enough to keep dropdowns open when you click inside them:
You’ll also notice that the selected date below is not updated when we select a date:
Let’s quickly fix that so it updates automatically whenever the selectOption() function is called. We’ll create the function we referenced in selectOption() above:
function updateSelectedDate() {\\n const month = monthSelect.querySelector(\'.select-display\').textContent;\\n const day = daySelect.querySelector(\'.select-display\').textContent;\\n const year = yearSelect.querySelector(\'.select-display\').textContent;\\n \\n if (month !== \'Month\' && day !== \'Day\' && year !== \'Year\') {\\n selectedDate.textContent = `Selected: ${month} ${day}, ${year}`;\\n }\\n}\\n
Now we see that the date updates after we make our selection. Also, you’ll notice that when you scroll, nothing happens. You have to click an option to select a date, which is no different from a regular selector form. To fix that, we will be using the Intersection Observer API.
\\nWe will create a function called setupIntersectionObservers()
. This function is where we write our most important scroll-to-select form feature logic:
function setupIntersectionObservers() {\\n document.querySelectorAll(\'.options-selector\').forEach(selector => {\\n const container = selector.closest(\'.custom-select\');\\n const observer = new IntersectionObserver(\\n (entries) => {\\n entries.forEach(entry => {\\n if (entry.isIntersecting) {\\n const option = entry.target;\\n selectOption(option, container);\\n }\\n });\\n },\\n {\\n root: selector,\\n rootMargin: \'-5% 0px -94% 0px\',\\n threshold: 0\\n }\\n );\\n\\n selector.querySelectorAll(\'.option-item\').forEach(option =>\\n observer.observe(option));\\n });\\n}\\n
In the code above, we used the Intersection Observer API, which helps us keep an eye on each element as it enters and exits a defined viewport area.
In our code, we created an observer for each dropdown’s options container. This observer is configured with specific margins (rootMargin: '-5% 0px -94% 0px') that create a detection area at the top of the dropdown.
When an option scrolls into this area, isIntersecting
becomes true, triggering the selection of that option. This creates the snap effect as you scroll through options.
Each option element (day, month, or year) gets observed individually through observer.observe(option)
. When an option enters the area, selectOption()
function (which was declared above) is called to update the display and maintain the selected state.
The observer
continuously monitors the scroll position, making selections feel smooth and natural as users scroll through the date options.
This is tied directly to the CSS scroll-snap behavior defined earlier; they work together to create a polished scrolling experience. The negative margins in rootMargin
ensure only one option can be “intersecting” at a time, preventing multiple simultaneous selections.
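One piece the snippets above don’t show is the bootstrapping that ties them together. Here is a minimal wiring sketch; the variable names match the IDs from the HTML earlier, but the call order is my assumption rather than code from the demo:

// look up the three dropdown containers and the output element
const monthSelect = document.getElementById('monthSelect');
const daySelect = document.getElementById('daySelect');
const yearSelect = document.getElementById('yearSelect');
const selectedDate = document.getElementById('selectedDate');

// build the options first, then attach handlers and observers
initializeSelectors();
setupDropdownHandlers();
clickHandler();
setupIntersectionObservers();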
Here’s what our scroll-select box looks like now:
This has been a long read, but trust me when I say I tried to make it as short as possible. Even if it was a bit hectic for you, you’ve at least added to your existing knowledge of working with selects. Now that you have these extra insights, you’re ready to implement a scroll-select box in your project.
For additional reading, check out our posts on JavaScript scroll-snap events and creating a custom <select> dropdown with CSS.
A big thank you for hanging on this far; here is the CodePen for this article. Keep coding!
If you’re looking for an alternative to your current deployment service, Deno Deploy could be the platform for you.
\\nFor those just getting started with deploying their first application, Deno Deploy’s simplicity might be exactly what you need; no complex configuration files to wrestle with or cloud concepts to master before getting your app live.
\\nThe Deno Deploy team recently announced its new tolerance for server-side-rendered Next.js applications. In this article, we will provide a guide to deploying your server-side-rendered Next.js application using Deno Deploy.
\\nTo follow along with this article, you will want to ensure that you have the following:
\\nWe will briefly discuss the reasons for choosing Deno Deploy and touch on some of its current limitations. We’ll also see how it measures up with similar tools like Vercel, one of the other top options in this space.
Next.js offers both statically generated pages and server-side-rendered applications. But up until late 2024, Deno Deploy only supported statically generated Next.js sites.
\\nWith the latest updates, users can now easily deploy SSR Next apps in less than seven minutes. (Note: This is from my observation, and could vary from user to user).
\\nKey benefits of Deno Deploy include:
\\nYou can easily deploy through repository linking or the command line. This will be elaborated on later in the deployment section.
\\nnpm
compatibilityDeno recognizes package.json files, which means you can bring your existing Node.js projects without any major changes.
Deno Deploy is a “globally distributed serverless JavaScript platform.” In practice, this means that when someone in Tokyo visits your app, it loads from a nearby server, making it faster for everyone, wherever they are.
\\nAfter deployment, you can easily integrate services such as database storage options, ORM support (Drizzle, Prisma), queue management for async tasks, and cron job support.
To spin up a simple Next.js SSR application, we must first know which kinds of applications are referred to as “server-side rendered”.
\\nAccording to the docs, Next is server-side-rendered by default, unless explicitly tagged client-side using the \\"use client\\"
directive.
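For contrast, a component that opts out of server rendering declares the directive as its first statement. This counter is a made-up example, not part of the app we’re building:

'use client';

import { useState } from 'react';

// client-side state like this is exactly what the directive exists for
export default function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
}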
You should note that the SSR benefits are best seen in content-heavy applications, such as blogs, news sites, and e-commerce product pages.

Most of these applications do not need much interaction. Sites like these benefit from SSR because they require fast initial page loads to help with SEO and user experience.
With this in mind, we will spin up a basic SSR Next.js application that fetches a random Chuck Norris joke from an API and displays it. To do that, we will install Next 15 using the command below:
\\nnpx create-next-app@latest\\n
Now create a pages directory. Inside this directory, create an index.tsx file. Your component tree should look like this:

Next SSR APP/
├── pages/
│   ├── index.tsx
│   └── _app.tsx
├── public/
└── styles/
Inside the index.tsx
file, paste the code below:
// pages/index.tsx\\n\\n\\nexport async function getServerSideProps() {\\n const res = await fetch(\'https://api.chucknorris.io/jokes/random\');\\n const data = await res.json();\\n\\n return {\\n props: {\\n joke: data.value,\\n },\\n };\\n}\\n\\nexport default function Page({ joke }: { joke: string }) {\\n return (\\n <div className=\\"min-h-screen bg-gray-100 flex items-center justify-center p-4\\">\\n <div className=\\"max-w-2xl w-full bg-white rounded-lg shadow-lg p-8\\">\\n <p className=\\"text-xl text-gray-800 text-center font-medium leading-relaxed\\">\\n {joke}\\n </p>\\n </div>\\n </div>\\n );\\n}\\n
The code above fetches a random joke from the Chuck Norris API and returns it as a prop accessible on our page. For our styles, Tailwind CSS and lucide-react were used in the _app.tsx component like this:
import type { AppProps } from \'next/app\'\\nimport { Laugh } from \'lucide-react\'\\nimport Head from \'next/head\'\\nimport \'../styles/globals.css\'\\n\\nexport default function App({ Component, pageProps }: AppProps) {\\n return (\\n <>\\n <Head>\\n <title>Logrocket Powered Jokes</title>\\n <meta name=\\"description\\" content=\\"Random Chuck Norris jokes served with SSR\\" />\\n </Head>\\n <div>\\n <nav className=\\"bg-white shadow-sm\\">\\n <div className=\\"max-w-7xl mx-auto px-4 sm:px-6 lg:px-8\\">\\n <div className=\\"flex justify-between h-16 items-center\\">\\n <div className=\\"flex items-center\\">\\n <Laugh className=\\"text-blue-500\\" size={32} />\\n <span className=\\"ml-2 text-xl font-bold\\">Chuck Jokes</span>\\n </div>\\n </div>\\n </div>\\n </nav>\\n <Component {...pageProps} />\\n </div>\\n </>\\n )\\n}\\n
If you are a fan of very straightforward steps, Deno Deploy has you covered. Within Deno Deploy, you can easily deploy your application by simply connecting a GitHub repo. But don’t worry if you’re like me (and many other developers) who prefer using our command line interface.
\\nIn this section, we will cover both steps. But prior to that, you will need to navigate to your next.config.ts file
in your application and configure output: \\"standalone
. Your next.config.ts
should look like this:
import type { NextConfig } from \\"next\\";\\n\\nconst nextConfig: NextConfig = {\\n output: \\"standalone\\",\\n \\n};\\n\\nexport default nextConfig;\\n
According to the Deno Deploy docs, when we deploy an SSR Next.js app without specifying an output type in next.config.js
, as we just did above, the platform automatically uses a standardized package (jsr:@deno/nextjs-start)
to run your application with configurable environment variables.
With the Next.js config in place, we will push our application to GitHub. Now, we can connect our GitHub repository and deploy our SSR application.
At this point, if you do not have a Deno Deploy account, you will be prompted to sign up with your GitHub account. This is what your dashboard should look like after signing up:
You can ignore the welcome message if it annoys you and focus on the two sections below: deploying an existing project or learning about Deploy.
By selecting I have an existing project, you will be routed to the page below, where you will give Deno Deploy access to your repositories. Granting access to all of them is a fine idea if you plan to use Deno Deploy for everything going forward:
\\nI typically opt for the second option, which is to only give Deno Deploy access to the specific repository I will want to deploy:
We will go ahead and click the Deploy Project button:
\\nBy clicking Deploy Project, your Next.js project will automatically be detected. Deno Deploy will prepare the necessary build configuration, one of which will be the deploy.yml
action file. Afterward, the changes will be committed to your existing repo.
The new deploy.yml
will then be responsible for building and deploying your project on every new push.
Feel free to customize the action to your taste by editing the .github/workflows/deploy.yml
file. At this point the project is live:
This is a simpler way to deploy an application with Deno; you’re done in fewer than five commands. We will use deployctl. If you don’t have it installed yet, use the command below:
deno install -gArf jsr:@deno/deployctl\\n
According to the Deno Deploy team, this command line interface comes with its own benefits. These include managing the entire deployment lifecycle of your Next projects from the start, watching live updates in real-time, switching back to older versions for production, and more.
\\nIt also lets you work with Deno Deploy from automated systems like continuous integration platforms. Now in the root of your Next.js project, run:
\\ndeno task build\\n
This builds your application; Deno will automatically recognize your package.json. Then, finally, run the command to deploy your application:
deployctl deploy --include=.next --include=public jsr:@deno/nextjs-start/v15\\n
In the terminal, you should see this success log below:
\\n√ Found 101 assets.\\n√ Uploaded 102 new assets.\\n√ Production deployment complete.\\n√ Created config file \'deno.json\'.\\n\\nView at:\\n - https://emmanuelo22-deno-deploy-53-yy9z56s1z2de.deno.dev/\\n - https://emmanuelo22-deno-deploy-53.deno.dev/\\n
Our application looks like this:
\\nYou can now view the deployed version.
\\nVercel is currently one of the most popular platforms for deploying your Next.js applications. If you’re building with Next.js, there’s a good chance you’re already using Vercel — or you’ve at least heard of it.
\\nVercel was created by the same team behind Next.js and offers a seamless deployment experience specifically optimized for Next.js apps.
\\nWe’re comparing these two platforms because while Vercel is the go-to choice for Next.js developers, Deno Deploy is emerging as a good alternative with its simpler approach and cost-effective pricing. Below is a detailed comparison of both platforms:
\\nFeature | \\nDeno Deploy | \\nVercel | \\n
Primary focus | \\nDeploying JavaScript, TypeScript, and WebAssembly server-side applications | \\nBuilding and deploying web applications, supporting various frontend frameworks | \\n
Ease of setup | \\nSupports direct deployments from repositories | \\nStraightforward GitHub, GitLab, and Bitbucket integrations | \\n
Supported languages | \\nJavaScript, TypeScript, WebAssembly | \\nJavaScript, TypeScript, Python, Go, Ruby, PHP, etc | \\n
Serverless functions | \\nBuilt-in runtime for JavaScript and TypeScript | \\nSupports serverless functions in various languages | \\n
Edge functionality | \\nEdge-first, optimized for global low-latency applications | \\nEdge middleware for custom logic | \\n
Pricing | \\nUsage-based, with a free tier available | \\nUsage-based, with a free tier available | \\n
Community support | \\nAn active growing community, with official documentation and community discussions | \\nA very large developer community and great documentation | \\n
CI/CD | \\nContinuous deployment via GitHub | \\nIntegrated CI/CD pipelines for GitHub, GitLab, and Bitbucket | \\n
The detailed comparison above helps highlight the strengths and focus areas of both platforms. One thing worth mentioning is that deploying SSR Next.js apps with Deno Deploy is relatively new.
It’s possible you’ll run into server errors, which I’ve experienced when deploying the latest Next.js version. However, this issue is likely a temporary bug and should not deter you from exploring. You can also submit an issue on GitHub.
\\nThis article provided a detailed, step-by-step guide to deploying your first Next.js SSR application on Deno Deploy. I hope you had a successful deployment!
For further reading, check out our Deno adoption guide and this post on building a server-rendered app with Next.js and Express.
Editor’s note: This post was last updated on 5 February 2025 to include new information about EAS Workflows.
\\nContinuous integration (CI) and continuous delivery (CD) pipelines are a set of automated processes that help developer teams deliver software more quickly and reliably. CI pipelines automate the process of building, testing, and deploying code, while CD pipelines automate the process of delivering that code to the end user.
\\nIn this article, we will discuss the most popular CI/CD pipelines used by React Native developers.
\\nAs developers, we know that one of the most crucial steps while programming is to build, deploy and test our code. To make this happen, we can do the following:
\\nproduction
Even though this solution works, there are a few issues with this:
\\nThis is the problem a CI/CD pipeline comes in to solve. Let’s cover a few different options for React Native development.
\\nEAS Workflows is a dedicated React Native CI/CD solution for iOS, Android, and Web. With this service, developers can automate their development and release processes, which makes the whole development experience smoother and more efficient. EAS Workflows is the easiest service on this list to work with because it only requires a few commands to get up and running:
\\nFor example, to build and publish, all we need to do is the following:
npm install --global eas-cli
eas login # sign in to the service
eas build:configure # this will generate an eas.json file
# this file will allow you to configure your build process.
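For reference, the generated eas.json holds named build profiles. A freshly configured file typically looks roughly like this, though the exact contents vary by project and CLI version, so treat this as a sketch:

{
  "cli": { "version": ">= 5.0.0" },
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal"
    },
    "preview": { "distribution": "internal" },
    "production": {}
  },
  "submit": {
    "production": {}
  }
}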
For more information on deployment and build configuration, navigate to Expo’s documentation.
\\nHere are some of the things I like about EAS:
\\nHere are the cons:
\\nMicrosoft App Center (MAC) is a CI/CD platform dedicated to app development. Unlike Expo Application Services, it supports both React Native and other cross-platform technologies like Unity and Xamarin.
\\nHere’s what it looks like:
\\nBuilding an app on App Center is a bit involved, but it is still straightforward. First, upload your project on GitHub, Azure, or another code hosting platform:
\\nNext, specify the project’s location in App Center’s settings:
\\nFinally, the service will then ask you to configure your build settings. You also might have to generate and upload a keystore file in this step.
\\nThat’s it! When that’s done, the pipeline will start building the app for you.
\\nMicrosoft App Center has some significant strengths that stand out among the competition.
\\nIt can run a test to verify whether the app can launch successfully. It is particularly useful in situations where you need to ensure the functionality of a new build.
\\nWith this feature, you can verify this without manually running the app on a mobile device:
\\nHere are a few other pros worth mentioning:
\\nHowever, there were some things that I found unappealing:
\\nGitHub Actions is a prominent option among numerous open-source programmers. One reason for its popularity is that this tool integrates with GitHub, so developers can use it to automate their workflows directly from their GitHub repository:
\\nAlthough building an app on GitHub Actions is tricky, it provides greater control over the building process as compared to other platforms, thus making it a worthwhile trade-off.
\\nTo deploy with GitHub Actions, create a folder in your repo called .github/workflows
. There, create a new file called ci.yml
:
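The contents of ci.yml depend on your pipeline; as a minimal sketch just to show the shape of the file (branch name, Node version, and steps here are illustrative, not the article’s exact pipeline):

name: CI

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # install dependencies and run the test suite
      - run: npm ci
      - run: npm test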
This tells GitHub that our project will use GitHub Actions for deployment. After this step, follow the instructions in this LogRocket article to build a CI/CD pipeline using GitHub Actions in React Native.
\\nHere are some of the reasons why this pipeline service might be suitable for you:
\\nHowever, as compared to other platforms, here are some things that I didn’t like:
\\nCodeMagic is another CI/CD pipeline specifically geared towards mobile app development frameworks, including Flutter, Cordova, Ionic, and others:
\\nJust like Expo and Microsoft’s App Center, deploying and building your React Native app is fairly easy. To get started, create a file called codemagic.yml
in your React Native app, and write the following code:
workflows:\\n sample-workflow:\\n name: Codemagic Sample Workflow\\n max_build_duration: 120\\n instance_type: mac_mini_m1\\n\\n
This tells the pipeline that our build will use Apple’s M1 Mac machine for deployment.
\\nAfter this step, it’s best to head to CodeMagic’s documentation to learn how to build and deploy your project.
\\nHere are some of the things I loved about it:
\\nHowever, there were some things that I didn’t like about it:
\\nProfessional mobile developers also widely use other CI services, such as Bitrise and Jenkins CI. Since these services have compilation steps similar to Microsoft App Center, we won’t discuss their building processes here.
Just like CodeMagic, Bitrise is geared towards mobile app development. Furthermore, it supports add-ons to help with development, such as debug reports or release management.
Here are some aspects of Bitrise that might make it an appealing option:
\\nHowever, there were some flaws that might be a deal-breaker to some:
Jenkins is another pipeline service, one targeted at enterprises and large businesses because the software is completely self-hosted. As discussed before, this is great for situations where the project’s source code has to be kept private.

Self-hosting is also a big reason Jenkins is popular among larger companies: it lets them avoid paying for expensive tiers and use local hardware instead.

However, this comes at a cost: maintaining your Jenkins host and keeping it secure might be a hassle for some teams.
\\nHere is a small table that summarizes all pros and cons of all platforms discussed in this article:
\\nTool | \\nKey Features | \\nPros | \\nCons | \\nPricing | \\n
---|---|---|---|---|
Expo Application Services | \\nBuilt for React Native and Expo projects | \\n\\n
| \\n\\n
| \\nFree tier & Paid tiers | \\n
Microsoft App Center | \\nSupports multiple platforms, including React Native | \\n\\n
| \\n\\n
| \\nFree tier & Paid tiers | \\n
GitHub Actions | \\nIntegrates directly with GitHub repository | \\n\\n
| \\n\\n
| \\nFree tier & Paid tiers | \\n
CodeMagic | \\nSpecifically geared towards mobile app frameworks | \\n\\n
| \\nNo self-hosted option | \\nPay-as-you-go | \\n
Bitrise | \\nBuilt for mobile apps, Uses Apple’s M-series machines | \\n\\n
| \\nNo self-hosted option | \\nPay-as-you-go | \\n
Jenkins CI | \\nCompletely self-hosted, no online hosting option available | \\n\\n
| \\nMaintaining and hosting of the CI server on local infrastructure can be a pain | \\nFree | \\n
In this article, we briefly discussed some popular CI/CD platforms for React Native and why they are crucial in the programming world. We also included two honorable mentions, Jenkins CI and Bitrise, in our comparison table. Remember that every project is different, so evaluate each tool’s advantages and disadvantages against your own needs.
\\nIn my projects, I typically use Expo Services because it is incredibly easy to set up and use, and its free tier is more than enough for my needs. Thank you so much for reading!
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nreduce
works\\n Object.groupBy
\\n Object.groupBy
\\n Map.groupBy
\\n Object.groupBy
vs. Map.groupBy
vs. reduce
\\n Sorting a list by a shared category is a common task in JavaScript, often solved using Array.prototype.reduce
. While powerful, reduce
is a bit cumbersome and heavyweight for this kind of job. For years, the use of this functional programming approach was a common pattern for converting data into a grouped structure.
Enter Object.groupBy
, a new utility that has gained cross-browser compatibility as of the end of 2024.
Designed to simplify the grouping process of data structures, Object.groupBy
offers a more intuitive and readable way to group and sort lists by a shared category. In this article, we’ll compare the functional approach of reducing with the new grouping method, explore how they differ in implementation, and provide insights into performance considerations when working with these tools.
reduce
worksThe reduce
method is a powerful utility for processing arrays. The term “reducer” originates from functional programming. It’s a widely used synonym for “fold” or “accumulate.”
In such a paradigm, reduce
represents a higher-order function that transforms a data structure (like an array) into a single aggregated value. It reduces, so to speak, a collection of values into one value by repeatedly applying a combining operation, such as summing numbers or merging objects.
The signature looks like this:
\\nreduce<T, U>(\\n callbackFn: (accumulator: U, currentValue: T, currentIndex: number, array: T[]) => U,\\n initialValue: U\\n): U;\\n// signature of callback function\\ncallbackFn: (accumulator: U, currentValue: T, currentIndex: number, array: T[]) => U\\n\\n
Let’s break down the different parts:
\\naccumulator
: The aggregated result from the previous callback execution or the initialValue
for the first execution. It has the type of the initial value (type U
)currentValue
: The current element of the array being processed (type T
)currentIndex
: The index of the currentValue
in the array (type number)array
: The array on which reduce was called (type T[]
)initialValue
: This sets the initial value of the accumulator (type U
) if provided. Otherwise, the value is set to the first array itemAfter all array items are processed, the method returns a single value, i.e., the accumulated result of type U
.
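As a one-line refresher of that idea before we get to grouping, summing numbers is the classic case:

// fold four numbers into a single accumulated value
const total = [1, 2, 3, 4].reduce((acc, n) => acc + n, 0);
console.log(total); // 10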
Let’s pretend we have an array of order objects:
\\nconst orders = [\\n { category: \'electronics\', title: \'Smartphone\', amount: 100 },\\n { category: \'electronics\', title: \'Laptop\', amount: 200 },\\n { category: \'clothing\', title: \'T-shirt\', amount: 50 },\\n { category: \'clothing\', title: \'Jacket\', amount: 100 },\\n { category: \'groceries\', title: \'Apples\', amount: 10 },\\n // ...\\n];\\n\\n
We want to group the order list into categories like this:
\\n{\\n electronics: [\\n {\\n category: \\"electronics\\",\\n title: \\"Smartphone\\",\\n amount: 100\\n },\\n {\\n category: \\"electronics\\",\\n title: \\"Laptop\\",\\n amount: 200\\n }\\n ],\\n clothing: [\\n {\\n category: \\"clothing\\",\\n title: \\"T-shirt\\",\\n amount: 50\\n },\\n {\\n category: \\"clothing\\",\\n title: \\"Jacket\\",\\n amount: 100\\n }\\n ],\\n // ...\\n}\\n\\n
The next snippet shows a possible implementation:
\\nconst groupedByCategory = orders.reduce((acc, order) => {\\n const { category } = order;\\n // Check if the category key exists in the accumulator object\\n if (!acc[category]) {\\n // If not, initialize it with an empty array\\n acc[category] = [];\\n }\\n // Push the order into the appropriate category array\\n acc[category].push(order);\\n return acc;\\n}, {});\\n\\n
Object.groupBy
Let’s compare the previous code to the following implementation with the new Object.groupBy
static method:
const ordersByCategory = Object.groupBy(orders, order => order.category);\\n\\n
This solution is straightforward to understand. The callback function in Object.groupBy
must return a key for each element (order
) in the passed array (orders
).
In this example, the callback returns the category value since we want to group all orders by all unique categories. The created data structure looks exactly like the result of the reduce
function.
To demonstrate that the callback can return any string, let’s organize products into price ranges:
\\nconst products = [\\n { name: \'Wireless Mouse\', price: 25 },\\n { name: \'Bluetooth Headphones\', price: 75 },\\n { name: \'Smartphone\', price: 699 },\\n { name: \'4K Monitor\', price: 300 },\\n { name: \'Gaming Chair\', price: 150 },\\n { name: \'Mechanical Keyboard\', price: 45 },\\n { name: \'USB-C Cable\', price: 10 },\\n { name: \'External SSD\', price: 120 }\\n ];\\n\\nconst productsByBudget = Object.groupBy(products, product => {\\n if (product.price < 50) return \'budget\';\\n if (product.price < 200) return \'mid-range\';\\n return \'premium\';\\n});\\n\\n
The value of productsByBudget
looks like this:
{\\n budget: [\\n {\\n \\"name\\": \\"Wireless Mouse\\",\\n \\"price\\": 25\\n },\\n {\\n \\"name\\": \\"Mechanical Keyboard\\",\\n \\"price\\": 45\\n },\\n {\\n \\"name\\": \\"USB-C Cable\\",\\n \\"price\\": 10\\n }\\n ],\\n \\"mid-range\\": [\\n {\\n \\"name\\": \\"Bluetooth Headphones\\",\\n \\"price\\": 75\\n },\\n {\\n \\"name\\": \\"Gaming Chair\\",\\n \\"price\\": 150\\n },\\n {\\n \\"name\\": \\"External SSD\\",\\n \\"price\\": 120\\n }\\n ],\\n premium: [\\n {\\n \\"name\\": \\"Smartphone\\",\\n \\"price\\": 699\\n },\\n {\\n \\"name\\": \\"4K Monitor\\",\\n \\"price\\": 300\\n }\\n ]\\n}\\n\\n
Let’s consider the following example:
\\nconst numbers = [1, 2, 3, 4];\\nconst isGreaterTwo = Object.groupBy(numbers, x => x > 2);\\n\\n
The value of isGreaterTwo
looks like this:
{\\n \\"false\\": [1, 2],\\n \\"true\\": [3, 4]\\n}\\n\\n
This demonstrates how Object.groupBy
automatically casts non-string return values into string keys when creating group categories. In this case, the callback function checks whether each number is greater than two, returning a Boolean. These Booleans are then transformed into the string keys \\"true\\"
and \\"false\\"
in the resulting object.
N.B., remember that automatic type conversion is usually not a good practice.
\\nObject.groupBy
Object.groupBy
excels at simplifying basic grouping operations, but has limitations. It directly places the exact array items into the resulting groups, preserving their original structure.
However, if you want to transform the array items while grouping, you’ll need to perform an additional transformation step after the Object.groupBy
operation. Here is a possible implementation to group orders but remove superfluous category properties:
const cleanedGroupedOrders = Object.fromEntries(
  Object.entries(Object.groupBy(orders, order => order.category))
    .map(([key, value]) => [key, value.map(groupedValue => ({
      title: groupedValue.title,
      amount: groupedValue.amount
    }))])
);
Alternatively, you can utilize reduce
with its greater flexibility for transforming data structures:
const cleanedGroupedOrders = orders.reduce((acc, order) => {\\n const { category } = order;\\n if (!acc[category]) {\\n acc[category] = [];\\n }\\n acc[category].push({ title: order.title, amount: order.amount });\\n return acc;\\n}, {});\\n\\n
Map.groupBy
If you need to group objects with the ability to mutate them afterwards, then Map.groupBy
is most likely the better solution:
const groupedOrdersMap = Map.groupBy(orders, order => order.category);\\n\\n
As you see, the call signature is the same; what differs is that you get a Map back instead of a plain object.
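That difference shows up when you read the groups back out. A quick sketch using the orders data from earlier:

const groupedOrdersMap = Map.groupBy(orders, order => order.category);

// groups are read with the Map API rather than property access
const electronics = groupedOrdersMap.get('electronics');

// the grouped arrays are plain arrays, so they can be mutated in place
electronics.push({ category: 'electronics', title: 'Tablet', amount: 150 });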
\\nIf you group array elements of primitive data types or read-only objects, then stick with Object.groupBy
, which performs better.
Object.groupBy
vs. Map.groupBy
vs. reduce
To compare the performance of these different grouping methods, I generated a large list of order objects and ran the various grouping algorithms against it:
\\nconst categories = [\'electronics\', \'clothing\', \'groceries\', \'books\', \'furniture\'];\\nconst orders = [];\\n// Generate 15 million orders with random categories and amounts\\nfor (let i = 0; i < 15000000; i++) {\\n const category = categories[Math.floor(Math.random() * categories.length)];\\n // Random amount between 1 and 500\\n const amount = Math.floor(Math.random() * 500) + 1; \\n orders.push({ category, amount });\\n}\\n\\n
With that test data in place, I leverage performance.now
to measure the runtimes. I run the different variants 25 times on different browsers and calculate the mean value of the running times for each variant:
const runtimeData = {
  'Array.reduce': [],
  'Object.groupBy': [],
  'Map.groupBy': [],
  'reduce with transformation': [],
  'Object.groupBy + transformation': []
};

for (let i = 0; i < 25; i++) {
  console.log(`Run ${i + 1}`);
  measureRuntime();
}

// Log average runtimes
console.log('Average runtimes:');
for (const [variant, runtimes] of Object.entries(runtimeData)) {
  const average = runtimes.reduce((a, b) => a + b, 0) / runtimes.length;
  console.log(`${variant}: ${average.toFixed(2)} ms`);
}

function measureRuntime() {
  // Declare timing variables locally instead of relying on implicit globals
  let start, end;

  // Object.groupBy
  start = performance.now();
  const groupedOrdersGroupBy = Object.groupBy(orders, order => order.category);
  end = performance.now();
  runtimeData['Object.groupBy'].push(end - start);

  // Array.reduce
  // ...

  // Map.groupBy
  // ...

  // Reduce with transformation
  // ...

  // Object.groupBy + transformation
  // ...
}
Here are the performance results:
Method | Chrome (ms) | Firefox (ms) | Safari (ms)
---|---|---|---
Array.reduce | 443.32 | 262.04 | 1153.2
Object.groupBy | 540.96 | 233.16 | 659.28
Map.groupBy | 424.41 | 581.6 | 731.32
reduce with transformation | 653.74 | 1125.6 | 1054.8
Object.groupBy with transformation | 860.61 | 1993.92 | 1609.12
Based on these results, Object.groupBy
was the fastest method on average, followed by Map.groupBy
. However, if you need to apply additional transformations while grouping, reduce
may be the better choice in terms of performance.
Object.groupBy
and Map.groupBy
have reached baseline status as of the end of 2024. To check specific browser compatibility, you can refer to their respective CanIUse pages (Object.groupBy
and Map.groupBy
).
If you need to support older browsers, you can use a shim/polyfill.
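If you just need the behavior rather than spec-exact semantics, a minimal fallback can be built on reduce. This is a sketch only; a real polyfill such as the one in core-js also covers additional edge cases:

// Minimal Object.groupBy fallback (sketch, not spec-complete)
if (typeof Object.groupBy !== 'function') {
  Object.groupBy = (items, callback) =>
    Array.from(items).reduce((acc, item, index) => {
      const key = String(callback(item, index)); // mirror the string-key coercion
      (acc[key] ??= []).push(item);
      return acc;
    }, Object.create(null)); // the real API returns a null-prototype object
}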
When it comes to grouping data in JavaScript, choosing between the new grouping functions and Array.prototype.reduce depends on the level of flexibility and transformation you need.
Object.groupBy
makes grouping more intuitive, but it’s limited if you need to modify data in the process. Map.groupBy
offers a more flexible structure, especially for mutable data, while reduce
remains the go-to method for complex or custom transformations.
The choice between the methods depends on the specific requirements of your use case.
With web development constantly evolving, we’ve seen increasing adoption of client-server architecture in the ecosystem. While this architecture has many advantages, one of its most common challenges is the time frontend developers spend waiting for the backend team to implement new API endpoints or modify existing ones.
While most teams propose a full-stack development approach to solve this challenge, API mocking has proven to be a more efficient solution.
\\nMocking allows frontend developers to simulate the responses and behaviors of a live API, such as error handling, timeouts, and specific status codes in real time. Chrome DevTools Local Overrides make this even easier without complex integrations or writing code.
\\nIn this tutorial, we will explore how to use Chrome DevTools Local Overrides for API mocking. We’ll mock API response data in a production website without accessing the backend. The tutorial will also cover how to bypass Cross-Origin Resource Sharing (CORS) issues without modifying the server code.
\\nBefore moving forward with this tutorial, you should have:
Over the years, the JavaScript ecosystem has developed many libraries for mocking APIs in browser and Node.js environments. This section will highlight the common tools and libraries for mocking APIs and the limitations of these libraries.
\\nMock Service Worker (MSW) is a popular API mocking library that supports both mocking in the browser and Node.js. It allows you to intercept requests and mock corresponding responses.
\\nIt supports mocking for RESTful, GraphQL, and WebSocket APIs.
\\nHere is an HTTP request handler for mocking a GET
request with MSW:
import { http, HttpResponse } from \'msw\'\\n\\nexport const handlers = [\\n http.get(\'https://example.com/user\', () => {\\n return HttpResponse.json({\\n id: \'c7b3d8e0-5e0b-4b0f-8b3a-3b9f4b3d3b3d\',\\n firstName: \'John\',\\n lastName: \'Maverick\',\\n })\\n }),\\n]\\n\\n
The handler intercepts GET https://example.com/user
requests and then responds with mock JSON data.
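Handlers take effect in the browser once the Service Worker is registered. A sketch assuming MSW v2’s setupWorker API:

import { setupWorker } from 'msw/browser'
import { handlers } from './handlers' // the handler array from above

// Registers the Service Worker that intercepts matching requests
export const worker = setupWorker(...handlers)
await worker.start() // typically called before the app boots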
Limitations of MSW include:
\\nThe Axios Mock Adapter is an API mocking library specifically designed for mocking response headers and data for API calls made with Axios. It works on Node.js and also in a browser.
\\nHere is an HTTP request handler for mocking a GET
request with Axios Mock Adapter:
const axios = require(\\"axios\\");\\nconst AxiosMockAdapter = require(\\"axios-mock-adapter\\");\\n\\nconst mock = new AxiosMockAdapter(axios);\\nmock.onGet(\\"/users\\").reply(200, {\\n users: [{ id: 1, name: \\"John Smith\\" }],\\n});\\n\\n
mock.onGet
intercepts any GET
request to the specified endpoint and then responds with mock JSON data. It also receives arguments such as status, data, and headers.
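For example, reply accepts a status code, a response body, and optional response headers; the header below is purely illustrative:

mock.onGet("/users").reply(
  200,
  { users: [{ id: 1, name: "John Smith" }] },
  { "x-total-count": "1" } // illustrative custom response header
);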
Limitations of Axios Mock Adapter include:
\\nMirage JS is my favorite API mocking library. It’s designed to replicate your entire production API server. It ships with an in-memory database and lets you build out fully dynamic features, with data-fetching and persistence logic. Mirage JS uses factories to quickly simulate various server states.
\\nSince 2017, I have been developing critical customer-facing apps with Mirage JS without a backend; all I do is take user stories from our product manager and translate them into fully functional web apps. It is very handy, especially when working under tight deadlines to spin up MVPs efficiently.
\\nHere is an HTTP request handler for mocking a GET
request with Mirage JS:
import { createServer } from \\"miragejs\\"\\n\\ncreateServer({\\n routes() {\\n this.get(\\"/api/users\\", () => [\\n { id: \\"1\\", name: \\"Luke\\" },\\n { id: \\"2\\", name: \\"Leia\\" },\\n { id: \\"3\\", name: \\"Anakin\\" },\\n ])\\n },\\n})\\n\\n
Limitations of Mirage JS include:
\\nThis section will cover a brief overview of the tools available in Chrome DevTools, especially for network inspection, debugging, and testing. We’ll also discuss how these tools allow you to inspect, modify, and intercept API calls in real-time.
\\nImagine you’re dealing with an issue in your production environment. Maybe there’s a typo in an API call, or you’re running into an unexpected CORS error but don’t have the privilege to make changes in production. That’s where the DevTools Local Overrides feature comes in handy! It lets you temporarily modify resources and experiment with different scenarios right from your browser, without modifying your site in production.
\\nThe Network tab in Chrome DevTools is a powerful tool for monitoring and debugging network activity in web applications. It provides a detailed view of all network requests made by a webpage.
\\nChrome DevTools Local Overrides provides the following benefits, discussed in further detail below:
\\nThis section will cover how to simulate different API responses directly from the browser without needing external mocking services.
\\n\\nOpen DevTools in any production website of your choice, navigate to the Network panel, right-click a request you want to override, and choose Override content from the drop-down menu.
\\nWe intend to modify the scoreboard on the goal.com website:
\\nNow, DevTools will prompt you to Select a folder in which to store the override files:
\\nAfter selecting the folder, DevTools will prompt you to grant access rights to it. Click Allow to do so.
\\nNow, local overrides are set up and enabled. DevTools should take you to the Sources panel to let you make changes to web content.
\\nNow we can modify the scoreboard data and refresh the page to see the changes:
\\nFor situations where you are working on a data-driven frontend application and the backend API endpoints are not available, you can mock the API response directly in the browser. This is possible if you already know the data structure for the API response.
\\nHere is a React app that fetches data from the JSONPlaceholder Rest API:
\\nimport React, { useEffect, useState } from \'react\';\\nimport axios from \'axios\';\\nimport \'./App.css\';\\nconst App = () => {\\n const [users, setUsers] = useState([]);\\n const [loading, setLoading] = useState(true);\\n const [error, setError] = useState(null);\\n useEffect(() => {\\n const fetchUsers = async () => {\\n try {\\n const response = await axios.get(\'https://jsonplaceholder.typicode.com/footballers\');\\n setUsers(response.data);\\n setLoading(false);\\n } catch (err) {\\n setError(\'Failed to fetch data\');\\n setLoading(false);\\n }\\n };\\n fetchUsers();\\n }, []);\\n if (loading) return <p className=\\"loading\\">Loading...</p>;\\n if (error) return <p className=\\"error\\">Error: {error}</p>;\\n return (\\n <div className=\\"container\\">\\n <h1 className=\\"title\\">User List</h1>\\n <ul className=\\"userList\\">\\n {users.map((user) => (\\n <li key={user.id} className=\\"userCard\\">\\n <p><strong>Name:</strong> {user.name}</p>\\n <p><strong>Email:</strong> {user.email}</p>\\n <p><strong>Phone:</strong> {user.phone}</p>\\n <p><strong>Website:</strong> <a href={`https://${user.website}`} target=\\"_blank\\" rel=\\"noopener noreferrer\\">{user.website}</a></p>\\n </li>\\n ))}\\n </ul>\\n </div>\\n );\\n};\\nexport default App;\\n\\n
The JSONPlaceholder REST API does not have the /footballers
endpoint that we called in our React app, so this will result in a 404 error. However, we can mock the response data for this endpoint in our React app while waiting for the JSONPlaceholder backend team to make this endpoint available.
Open Chrome DevTools, navigate to the Network tab and then reload the page. Notice the 404 error in the /footballers
endpoint. Right-click the /footballers
endpoint, and choose Override content from the drop-down menu:
You should see the following pane:
\\nNow you can add the following mock data in the above pane:
\\n[\\n {\\n \\"id\\": 1,\\n \\"name\\": \\"Leanne Graham\\",\\n \\"username\\": \\"Bret\\",\\n \\"email\\": \\"[email protected]\\",\\n \\"address\\": {\\n \\"street\\": \\"Kulas Light\\",\\n \\"suite\\": \\"Apt. 556\\",\\n \\"city\\": \\"Gwenborough\\",\\n \\"zipcode\\": \\"92998-3874\\",\\n \\"geo\\": {\\n \\"lat\\": \\"-37.3159\\",\\n \\"lng\\": \\"81.1496\\"\\n }\\n },\\n \\"phone\\": \\"1-770-736-8031 x56442\\",\\n \\"website\\": \\"hildegard.org\\",\\n \\"company\\": {\\n \\"name\\": \\"Romaguera-Crona\\",\\n \\"catchPhrase\\": \\"Multi-layered client-server neural-net\\",\\n \\"bs\\": \\"harness real-time e-markets\\"\\n }\\n },\\n {\\n \\"id\\": 2,\\n \\"name\\": \\"Ervin Howell\\",\\n \\"username\\": \\"Antonette\\",\\n \\"email\\": \\"[email protected]\\",\\n \\"address\\": {\\n \\"street\\": \\"Victor Plains\\",\\n \\"suite\\": \\"Suite 879\\",\\n \\"city\\": \\"Wisokyburgh\\",\\n \\"zipcode\\": \\"90566-7771\\",\\n \\"geo\\": {\\n \\"lat\\": \\"-43.9509\\",\\n \\"lng\\": \\"-34.4618\\"\\n }\\n },\\n \\"phone\\": \\"010-692-6593 x09125\\",\\n \\"website\\": \\"anastasia.net\\",\\n \\"company\\": {\\n \\"name\\": \\"Deckow-Crist\\",\\n \\"catchPhrase\\": \\"Proactive didactic contingency\\",\\n \\"bs\\": \\"synergize scalable supply-chains\\"\\n }\\n },\\n {\\n \\"id\\": 3,\\n \\"name\\": \\"Clementine Bauch\\",\\n \\"username\\": \\"Samantha\\",\\n \\"email\\": \\"[email protected]\\",\\n \\"address\\": {\\n \\"street\\": \\"Douglas Extension\\",\\n \\"suite\\": \\"Suite 847\\",\\n \\"city\\": \\"McKenziehaven\\",\\n \\"zipcode\\": \\"59590-4157\\",\\n \\"geo\\": {\\n \\"lat\\": \\"-68.6102\\",\\n \\"lng\\": \\"-47.0653\\"\\n }\\n },\\n \\"phone\\": \\"1-463-123-4447\\",\\n \\"website\\": \\"ramiro.info\\",\\n \\"company\\": {\\n \\"name\\": \\"Romaguera-Jacobson\\",\\n \\"catchPhrase\\": \\"Face to face bifurcated interface\\",\\n \\"bs\\": \\"e-enable strategic applications\\"\\n }\\n },\\n {\\n \\"id\\": 4,\\n \\"name\\": \\"Patricia Lebsack\\",\\n \\"username\\": \\"Karianne\\",\\n \\"email\\": \\"[email protected]\\",\\n \\"address\\": {\\n \\"street\\": \\"Hoeger Mall\\",\\n \\"suite\\": \\"Apt. 692\\",\\n \\"city\\": \\"South Elvis\\",\\n \\"zipcode\\": \\"53919-4257\\",\\n \\"geo\\": {\\n \\"lat\\": \\"29.4572\\",\\n \\"lng\\": \\"-164.2990\\"\\n }\\n },\\n \\"phone\\": \\"493-170-9623 x156\\",\\n \\"website\\": \\"kale.biz\\",\\n \\"company\\": {\\n \\"name\\": \\"Robel-Corkery\\",\\n \\"catchPhrase\\": \\"Multi-tiered zero tolerance productivity\\",\\n \\"bs\\": \\"transition cutting-edge web services\\"\\n }\\n }\\n]\\n\\n
Reload the page and you should have the mocked data rendered in your React app:
\\nYou can also override response headers, which is particularly useful for testing CORS issues or other security-related header changes.
\\nFor instance, if a page encounters a CORS error preventing data from loading, it often requires server-side adjustments. While waiting for the backend team to resolve the issue, you can temporarily modify the headers yourself to test and continue your development without delays:
\\nNavigate to the Network tab then right-click the endpoint with the CORS issue, and choose Override headers from the drop-down menu:
\\nClick Add header then add Access-Control-Allow-Origin
and set its value to *
:
Reload the page and the CORS error will be gone:
\\nChrome Local Override automatically persists mocked data and stores other overridden resources in your drive.
\\nNavigate to the Sources tab, then click Overrides. Right-click the overridden file, choose Open in containing folder from the drop-down menu:
\\nYou can right-click to open the folder and edit the files using your favorite code editor. Even better, you can sync the folder to a shared location, making it easy to collaborate and share with your colleagues.
\\nWhile Chrome DevTools allows you to mock APIs without writing any code, there are many scenarios where external API mocking libraries are a better choice:
\\nExternal libraries like MSW and Mirage JS allow you to programmatically mock APIs in unit and integration tests. Chrome DevTools doesn’t support test automation.
\\nChrome DevTools is limited to Chromium-based browsers only while external libraries are browser-agnostic.
\\nWhen working with CI/CD pipelines for test automation, external libraries are better choices as DevTools doesn’t support CI/CD workflows.
\\nIf you need to share your mock setups across teams, external libraries are preferable because it’s easy to share code via Git.
\\nIn this tutorial, we explored common API mocking libraries, their limitations, and how to use DevTools Local Overrides for API mocking in a production website without accessing the backend. We also covered how to bypass CORS issues without modifying the server code, and when to still use API mocking libraries.
\\nIf you’re prototyping UI designs without a backend or testing API fixes before they go live, you should consider trying out Chrome DevTools Local Override. You’ll be surprised at how much easier your tasks can be!
StackAuth is a powerful, open source alternative to Auth0, designed to offer seamless authentication solutions without the need for costly service subscriptions. With support for multiple authentication methods, including traditional email/password logins and third-party OAuth providers like Google, StackAuth enables Next.js developers to maintain control over their authentication systems while leveraging modern security standards.
\\nThis guide covers how to configure StackAuth in a Next.js application with three authentication sources: email/password, GitHub, and Google. We’ll also explore the benefits, use cases, and key differences between StackAuth and Auth0.
\\nTo follow along, you should have at least a foundational understanding of Next.js.
\\nAuth0 is a popular, cloud-based authentication platform that simplifies application authentication and authorization. It offers features like Single Sign-On (SSO), Multi-Factor Authentication (MFA), and social logins (e.g., Google, Facebook).
\\nWhile Auth0 is widely used, it comes with several pain points that can make it less appealing for certain use cases, including its high costs, limited customization, data ownership concerns, vendor lock-in, complexity for advanced use cases, and more.
\\nStackAuth addresses these pain points by offering a powerful, open source, and self-hosted authentication solution. Reasons to consider StackAuth as an alternative to Auth0:
Feature | StackAuth | Auth0
---|---|---
Cost | Free, open source | Subscription required
Customization | Full control over authentication | Limited to service APIs
Hosting | Self-hosted | Cloud-based
Data Ownership | Complete control | Managed by Auth0
Supported Frameworks | Next.js (app router) | Multiple frameworks
StackAuth is ideal for:
To get started, create a Next.js app and make sure your project uses the App Router, as StackAuth doesn’t support the Pages Router:
npx create-next-app@latest stack-next-app --typescript
cd stack-next-app
Now, install and configure StackAuth using the setup wizard:
\\nnpx @stackframe/init-stack@latest\\n\\n
The wizard detects your project structure and sets up StackAuth automatically. After the setup is completed, you will be redirected to the browser to create an account if you don’t already have one.
\\nAfter that, you will be redirected to a page to select the authentication options you want to use on your application. Give your project a name, and select Email password, Google, and GitHub by toggling them on. Then, click the Create project button:
\\nYou will now be provided with API keys that will enable you to authenticate your application. Keep these secure:
\\nCopy the environment variables into a .env.local
file:
NEXT_PUBLIC_STACK_PROJECT_ID=<your-project-id>\\nNEXT_PUBLIC_STACK_PUBLISHABLE_CLIENT_KEY=<your-publishable-client-key>\\nSTACK_SECRET_SERVER_KEY=<your-secret-server-key>\\n\\n
In the src/app/page.tsx
file, paste in the following code:
\\"use client\\";\\nimport { UserButton, useUser } from \\"@stackframe/stack\\";\\nexport default function Home() {\\n const user = useUser();\\n const loggedIn = user !== null;\\n return (\\n <div className=\\"flex flex-col gap-4 m-4\\">\\n <p>\\n Welcome to the Stack Auth demo!\\n </p>\\n <p>\\n Are you logged in? {loggedIn ? \\"Yes\\" : \\"No\\"}\\n </p>\\n <div>\\n <UserButton />\\n </div>\\n </div>\\n );\\n}\\n\\n
This code uses the useUser
Hook to retrieve the current user object and checks if the user is logged in by verifying that the user is not null
. It dynamically displays whether the user is logged in and includes the UserButton
component, which serves as an interactive avatar with options to manage account settings or sign out. This setup provides basic authentication feedback and user management capabilities that can be used on an application.
Stack is now successfully configured in your Next.js project! Start your Next.js app by running npm run dev
.
Now navigate to http://localhost:3000/handler/signup on your browser to access the Stack sign-up page:
\\nNow you can access your Stack dashboard, and you will be able to access the newly created accounts:
\\nNow that authentication is set up, this section will walk through setting up StackAuth authentication and building a simple note-taking app where users can create and view notes after signing in.
\\nIn the app/page.tsx
file, we will create a page that will:
\\"use client\\";\\nimport { useUser } from \\"@stackframe/stack\\";\\nimport { useState } from \\"react\\";\\n\\nexport default function Home() {\\n const user = useUser();\\n const [notes, setNotes] = useState<string[]>([]);\\n const [note, setNote] = useState(\\"\\");\\n\\n if (!user) {\\n return (\\n <div className=\\"flex flex-col items-center justify-center min-h-screen\\">\\n <h1 className=\\"text-xl\\">Please Sign In</h1>\\n <a href=\\"/signin\\" className=\\"text-blue-500 underline\\">\\n Go to Sign In\\n </a>\\n </div>\\n );\\n }\\n\\n return (\\n <div className=\\"max-w-xl mx-auto mt-10\\">\\n <h1 className=\\"text-2xl\\">Welcome, {user.email}!</h1>\\n\\n <div className=\\"mt-4\\">\\n <input\\n type=\\"text\\"\\n value={note}\\n onChange={(e) => setNote(e.target.value)}\\n className=\\"border p-2 w-full\\"\\n placeholder=\\"Write a note...\\"\\n />\\n <button\\n onClick={() => {\\n if (note.trim()) {\\n setNotes([...notes, note]);\\n setNote(\\"\\");\\n }\\n }}\\n className=\\"bg-blue-500 text-white px-4 py-2 mt-2 rounded\\"\\n >\\n Add Note\\n </button>\\n </div>\\n\\n <ul className=\\"mt-4\\">\\n {notes.map((n, index) => (\\n <li key={index} className=\\"border p-2 mt-2\\">\\n {n}\\n </li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\n
This code is a Next.js client-side component that creates the note-taking app.
\\nThe app first checks if a user is logged in by using the useUser
Hook from @stackframe/stack
. If no user is logged in, it displays a message asking the user to sign in, along with a link to the sign-in page. This ensures that only authenticated users can access the note-taking features.
If a user is logged in, the app welcomes them by displaying their email address (user.email
). It provides an input field where users can type notes, and the current input value is stored in the note
state variable using the useState
Hook. When the user clicks the Add Note button, the note is added to the notes
array (also stored in state) if it’s not empty. The input field is then cleared, allowing the user to add more notes.
Now, when you run your application and navigate to the home page, you will see that authentication is required before accessing the note-taking features:
\\nAfter logging in, you will be able to create notes using the form:
\\nIn app/handler/[...stack]/page.tsx
, StackAuth automatically provides pre-built pages for sign-up, sign-in, and account management, but you can customize your sign-in page and create a custom app/signin/page.tsx
file.
Here’s an example of a custom sign-in page that lets users sign in with Google via OAuth:
'use client';
import { useStackApp } from "@stackframe/stack";
export default function CustomOAuthSignIn() {
  const app = useStackApp();
  return (
    <div>
      <h1>My Custom Sign In page</h1>
      <button onClick={async () => {
        // this will redirect to the OAuth provider's login page
        await app.signInWithOAuth('google');
      }}>
        Sign In with Google
      </button>
    </div>
  );
}
The same can be done for the sign-up page.
\\nNow you can tell the Stack app in stack.tsx
to use the sign-in page you just created:
export const stackServerApp = new StackServerApp({\\n // ...\\n // add these three lines\\n urls: {\\n signIn: \'/signin\',\\n }\\n});\\n\\n
Before deploying your application, you need to configure StackAuth for production to ensure security and optimal performance.
\\nIn production, you must specify your domain to prevent unauthorized callback URLs. Navigate to the Domain & Handlers tab in the Stack dashboard and add your domain (e.g., https://your-website.com
). Disable the Allow all localhost callbacks for development option to enhance security.
Replace the shared OAuth keys used in development with your own OAuth credentials. For each provider (e.g., Google, GitHub), create an OAuth app and configure the callback URL. Then, enter the client ID and client secret in the Stack dashboard under Auth Methods.
\\nFor production, set up your own email server to send emails from your domain. In the Stack dashboard, navigate to the Emails section, switch from the shared email server to a custom SMTP server, and enter your SMTP configurations.
\\nOnce all configurations are complete, enable production mode in the Project Settings tab of the Stack dashboard. This ensures your application runs securely with StackAuth in a production environment.
\\nWith its open source nature, StackAuth offers full control over user data, great customization options, and the ability to self-host for enhanced privacy and security.
\\n\\nBy following the steps outlined in this tutorial, you can set up authentication, configure environment variables, and leverage StackAuth’s pre-built and customizable components.
\\nFor production use, consider additional measures like rate limiting and token management to further enhance security. For more advanced use cases and detailed documentation, refer to the StackAuth SDK documentation.
pageswap and pagereveal events

The View Transition API brings page transitions and state-based UI changes — previously only possible with JavaScript frameworks — to the wider web.
This includes animating between DOM states in a single-page app (SPA) and animating the navigation between pages in a multi-page app (MPA). In other words, it brings view transitions to any type of website without bulky JavaScript dependencies and heady complexity. This is a win for users and developers! It is potentially a game changer.
\\nIn this article, I will focus on view transitions in MPAs. This is defined in the CSS View Transitions Module Level 2 specification and is referred to as cross-document view transitions. The cool thing is that the basics can be achieved without JavaScript — just a bit of declarative CSS will get you up and running! JavaScript is required when you want to implement some conditional logic.
\\nCross-document view transitions are now supported in both Chrome 126 and Safari 18.2. We can dive in straight away and use view transitions as a progressive enhancement today! 🙌
\\nView transitions improve the navigation experience. More specifically, they can:
\\nAll view transitions involve the following three steps:
1. The browser captures a snapshot of the outgoing (old) view
2. The DOM is updated — for an MPA, the new page is loaded — while rendering is suppressed
3. The transition animation runs. By default, this is a crossfade: the old view animates from opacity: 1 to opacity: 0 while the new view animates from opacity: 0 to opacity: 1
The major difference between a view transition in a SPA and an MPA is how the transition is triggered. In an MPA, a view transition is triggered by navigating to another page — this can happen by clicking on a link or submitting a form. Navigations that don’t trigger a view transition include navigating using the URL address bar, clicking a bookmark, and reloading a page.
\\nIf a navigation takes too long, then the view transition is skipped, resulting in an error. Chrome’s limit is four seconds. It’s unclear what defines the start of a navigation in this context — is it that initial bytes have to be downloaded?
\\nInterestingly, a single page can have multiple view transitions. We can target specific subtrees of the DOM for transitions, and it’s even possible to nest them.
\\nThere are some conditions for enabling view transitions, which we’ll cover in the next section.
\\nTo enable view transitions for a website, two key conditions need to be met:
\\n@view-transition
CSS at-rule, as shown below:@view-transition {\\n navigation: auto;\\n}\\n\\n
With that, the default crossfade view transition should be enabled for the pages. Let’s look at an example.
\\nHere is a demo of a carousel featuring a set of photos. A carousel is a component that permits cycling through a set of content, such as photos in a photo gallery. In this example, I slowed down the animation to two seconds to highlight the effect (more on this later):
\\nView transitions allow us to create a carousel where each page is an item. Links to the next page and the previous page are all that is necessary in the CSS snippet above. There is no need to juggle items with code or to stuff a ton of images into a single page:
\\n<!-- index.html - first page--\x3e\\n<h1>Cape Town</h1>\\n<img src=\\"cape-town.webp\\" alt=\\"..\\"/>\\n<a href=\\"page2.html\\" class=\\"next\\"><img src=\\"/1-carousel/shared/img/arrow-right.svg\\" /></a>\\n\\n<!-- page2.html - second page--\x3e\\n<h1>Hong Kong</h1>\\n<img src=\\"hong-kong.webp\\" alt=\\"..\\"/>\\n<a href=\\"index.html\\" class=\\"previous\\"><img src=\\"/1-carousel/shared/img/arrow-left.svg\\" /></a>\\n<a href=\\"page3.html\\" class=\\"next\\"><img src=\\"/1-carousel/shared/img/arrow-right.svg\\" /></a> \\n\\n
Here is an overview figure of the pages involved so you can better understand what is happening:
\\nIt would be remiss of me not to mention that there are three other conditions for enabling view transitions. You are likely satisfying these conditions by default.
\\nThe fine print of the specification states that all of these conditions must be met:
\\nWe can customize the animation through pseudo-elements:
\\n::view-transition-group()
is used to reference a particular view transition::view-transition-old()
is used to reference the source view (outbound transition)::view-transition-new()
is used to reference the target view (inbound transition)For each of these pseudo-elements, we provide the view transition name as an argument to reference the view transition we are interested in. The default name for the view transition of a page is root
as it applies to the :root
element.
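You can also give individual elements their own named transition groups. A brief sketch (the site-header name is arbitrary, and for cross-document transitions the same name must be set on both pages):

/* Give the site header its own transition group */
header {
  view-transition-name: site-header;
}

/* Reference that group by name in the pseudo-elements */
::view-transition-group(site-header) {
  animation-duration: 300ms;
}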
You can change the duration of the animation by putting the following on both pages:
\\n::view-transition-group(root) {\\n animation-duration: 3s;\\n}\\n\\n
To create a different transition effect, we can set animations for the source (old) and target (new) views separately.
\\n\\nFor example, let’s make a demo with a shrink animation. We will shrink the source page out of view, and have the target page expand into view:
We can do that with the following CSS:
\\n@keyframes shrink {\\n to {\\n scale: 0;\\n }\\n}\\n\\n::view-transition-old(root) {\\n animation: shrink 1s;\\n}\\n\\n::view-transition-new(root) {\\n animation: shrink 1s;\\n animation-direction: reverse;\\n}\\n\\n
Notice that we set animation-direction: reverse
on the target view transition; this makes the “shrink” animation expand!
Having opposite actions creates a symmetric effect, which can be pleasing. You don’t have to do this — you can treat each animation as a separate entity. You are free to come up with whatever tickles your fancy!
\\nFor cross-document view transitions, these pseudo-elements are only available on the target page. Don’t forget about this if you are making a view transition that goes in only one direction.
\\nLet’s see what else we can do with our view transitions — this time, introducing JavaScript!
\\nSo far, we have demonstrated that we can enable cross-document view transitions and customize the animations with CSS. This is powerful, but when we want to do more, we need JavaScript.
\\n\\nThe View Transition API does not cover all our needs. There are some complementary web features designed to be used in conjunction with it. They fall into the following categories:
\\npageswap
and pagereveal
events enable specifying conditional actions for view transitions. The pageswap
event is fired before the source page unloads, and the pagereveal
is fired before rendering the target pageNavigationActivation
objects that hold information about same-origin navigations. This saves developers the hassle of keeping track of this information themselves when they want to perform different animations/actions based on different URLsWe will discuss these features further with some examples.
\\npageswap
and pagereveal
eventsThe pageswap
and pagereveal
events give us the opportunity to perform some conditional logic for a view transition.
The pageswap
event is fired at the last moment before the source page is about to be unloaded and swapped by the target page. It can be used to find out whether a view transition is about to take place, customize it using types, make last-minute changes to the captured elements, or skip the view transition entirely.
The pagereveal
event is fired right before presenting the first frame of the target page. It can be used to act on different navigation types, make last-minute changes to the captured elements, wait for the transition to be ready in order to animate it, or skip it altogether.
In both of these events, you can access a ViewTransition object through the event’s viewTransition property. The ViewTransition object represents an active view transition and provides functionality to react to the transition reaching different states, e.g., when the animation is about to run and when the animation has just finished.
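For instance, the ViewTransition object exposes promises that resolve at those states. A brief sketch:

window.addEventListener('pagereveal', (event) => {
  if (event.viewTransition) {
    // Resolves when the pseudo-element tree exists and the animation is about to run
    event.viewTransition.ready.then(() => console.log('transition ready'));

    // Resolves once the animation has finished
    event.viewTransition.finished.then(() => console.log('transition finished'));
  }
});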
Let’s look at an example to tie the concepts together.
\\nLet’s create a demo to allow the user to disable/enable view transitions. I will add a checkbox to our carousel in the top right corner. If it is checked, we will disable (skip) view transitions:
\\nWe need to modify our HTML to add our checkbox input
, and we need to add a script
tag to point to the script we are about to write. We must add the script tag as a parser-blocking script in the <head>
. This is because the pagereveal
event must execute before the first rendering opportunity. This means the script can’t be a module, can’t have the async
attribute, and can’t have the defer
attribute:
<!DOCTYPE html>\\n<html lang=\\"en\\">\\n <head>\\n <!-- other elements as before--\x3e\\n\\n <!-- our script must be here exactly like this--\x3e\\n <script src=\\"script.js\\"></script>\\n </head>\\n\\n <body>\\n <label>Skip?<input type=\\"checkbox\\" id=\\"skip\\" /></label>\\n\\n <!-- other elements as before--\x3e\\n </body>\\n</html>\\n\\n
In our script, we will add event handlers for the pageswap
and pagereveal
events. In the pageswap
event handler, we write the value of the checkbox (true or false) to session storage, saving it as the skip
variable.
Notice that I consult the ViewTransition
object to decide if we want to store the value or not. The ViewTransition
object is null if there is no view transition taking place. Therefore, this check will return true
when a view transition is taking place.
In the pagereveal
event handler, we read the value of the skip
variable from session storage. If skip
has a value of “true” (session storage saves all values as strings), then we skip the view transition by calling the ViewTransition.skipTransition()
function:
/* script.js */\\n\\n// Write to storage on old page\\nwindow.addEventListener(\\"pageswap\\", (event) => {\\n if (event.viewTransition) {\\n let skipCheckbox = document.querySelector(\\"#skip\\");\\n sessionStorage.setItem(\\"skip\\", skipCheckbox.checked);\\n }\\n});\\n\\n// Read from storage on new page\\nwindow.addEventListener(\\"pagereveal\\", (event) => {\\n if (event.viewTransition) {\\n let skip = sessionStorage.getItem(\\"skip\\");\\n let skipCheckbox = document.querySelector(\\"#skip\\");\\n\\n if (skip === \\"true\\") {\\n event.viewTransition.skipTransition();\\n skipCheckbox.checked = true;\\n } else {\\n skipCheckbox.checked = false;\\n }\\n }\\n});\\n\\n
We use the value from session storage in pagereveal
to persist the checkbox state in the target page. This maintains the checkbox state between page navigations. Remember that HTTP is a stateless protocol; it will forget everything about the previous page unless you tell it!
If you are not familiar with session storage, you can inspect session storage in Chrome’s DevTools. You will find it in the Application tab (as seen in the image below). On the sidebar under the Storage category, you will see a Session storage item. Click on it and you should see the origin of your website e.g., http://localhost:3000. Click on it and it will reveal all of the stored values:
\\nThe Application tab in Chrome DevTools with the Session storage item open that is contained under the Storage category in the sidebar
\\nIn the pageswap
and pagereveal
events, you can take actions based on the navigation that is taking place. This information is available through the NavigationActivation
object. This object exposes the used navigation type, the source page navigation history entry, and the target page navigation history entry. It is through these navigation history entries that we can get the URL of each page. At the time of writing, only Chrome supports the NavigationActivation
object.
Let’s make a demo to add a slide animation to our carousel. We want the following to happen:
\\nFor this scenario, you can use view transition types. You can assign one or more types to an active view transition through a Set
object available in the ViewTransition.types
property. For our example, when transitioning to a higher page in the sequence, we will assign the next
type, and when going to a lower page we assign the previous
type.
Each of the types can be referenced in CSS to assign different animations:
\\nOverview of the assigning of types for page navigations. The link pointing to a page lower in the sequence is assigned a previous type, and a link pointing to a page higher in the sequence is assigned a next type.
\\nSounds good, right? But how do we determine the type?
\\nIt’s up to you to determine the type!
\\nIn this case, I will inspect the URL of the source page and target page to identify their order. We can get the URL of the source page and target page from the NavigationActivation
object. It contains a from
attribute that represents the source page as a history entry, and an entry
attribute that represents the target page as a history entry.
Because we follow a naming convention for our files that indicates their order, we can use this to identify an index for each page. The order is as follows:
\\nindex.html
page2.html
page3.html
In our code, our determineTransitionType
function will compare the indexes of the source page and target page to determine if it is a previous
type or next
type:
window.addEventListener("pageswap", async (e) => {
  if (e.viewTransition) {
    let transitionType = determineTransitionType(
      e.activation.from.url,
      e.activation.entry.url
    );

    e.viewTransition.types.add(transitionType);
  }
});

window.addEventListener("pagereveal", async (e) => {
  if (e.viewTransition) {
    // pagereveal does not expose the NavigationActivation object, we must get it from the global object
    let transitionType = determineTransitionType(
      navigation.activation.from.url,
      navigation.activation.entry.url
    );

    e.viewTransition.types.add(transitionType);
  }
});

function determineTransitionType(sourceURL, targetURL) {
  const sourcePageIndex = getIndex(sourceURL);
  const targetPageIndex = getIndex(targetURL);

  if (sourcePageIndex > targetPageIndex) {
    return "previous";
  } else if (sourcePageIndex < targetPageIndex) {
    return "next";
  }

  return "unknown";
}

function getIndex(url) {
  let index = -1;
  let filename = new URL(url).pathname.split("/").pop();

  if (filename === "index.html") {
    index = 1;
  }

  // extract a number from the filename, converting it so indexes compare numerically
  let numberMatches = /\d+/g.exec(filename);
  if (numberMatches && numberMatches.length === 1) {
    index = Number(numberMatches[0]);
  }

  return index;
}
In our stylesheet, we specify the four animations required. I was quite literal with their names:
\\n@keyframes slide-in-from-left {\\n from {\\n translate: -100vw 0;\\n }\\n}\\n\\n@keyframes slide-in-from-right {\\n from {\\n translate: 100vw 0;\\n }\\n}\\n\\n@keyframes slide-out-to-left {\\n to {\\n translate: -100vw 0;\\n }\\n}\\n\\n@keyframes slide-out-to-right {\\n to {\\n translate: 100vw 0;\\n }\\n}\\n\\n
We associate the animations with the view transition type with the :active-view-transition-type()
pseudo-class, and we provide the type as an argument.
For each type, we specify the animation for the source page with ::view-transition-old()
and the target page with ::view-transition-new()
:
::view-transition-group(root) {\\n animation-duration: 400ms;\\n}\\n\\nhtml:active-view-transition-type(next) {\\n &::view-transition-old(root) {\\n animation-name: slide-out-to-left;\\n }\\n\\n &::view-transition-new(root) {\\n animation-name: slide-in-from-right;\\n }\\n}\\n\\nhtml:active-view-transition-type(previous) {\\n &::view-transition-old(root) {\\n animation-name: slide-out-to-right;\\n }\\n\\n &::view-transition-new(root) {\\n animation-name: slide-in-from-left;\\n }\\n}\\n\\n
It takes a bit to get your head around all of this! But when you get used to it, you’ll be able to pull off a diverse range of animations for cross-document view transitions. That’s an exciting prospect!
\\nIn some cases, you may want to hold off the rendering of the target page until a certain element is present. This ensures the state you’re animating is stable:
\\n<link rel=\\"expect\\" blocking=\\"render\\" href=\\"#sidebar\\">\\n\\n
This ensures that the element is present in the DOM; however, it doesn’t wait until the content is fully loaded. If you are using this feature with images or videos that may take longer to load, you should factor this in.
\\nUse this feature wisely. Generally, we want to avoid blocking rendering! In my exploration of cross-document view transitions, I did not find a use case for this but it is good to be aware of its existence!
\\nThe browser support is strong for view transitions with both Chrome and Safari covering the majority of the APIs involved:
Feature | Chrome | Safari
---|---|---
Cross-Document View Transitions | v126+ | v18.2+
View transition types | v125+ | v18.2+
PageRevealEvent | v123+ | v18.2+
PageSwapEvent | v124+ | v18.2+
NavigationActivation interface | v123+ | –
Render blocking | v124+ | –
Nested View Transition Groups | Behind the #enable-experimental-web-platform-features flag | v18.2+
Auto View Transition Naming | Behind the #enable-experimental-web-platform-features flag | v18.2+
No matter how cool an animation looks, it can cause issues for people with vestibular disorders. For those users, you can choose to slow the animation down, pick a more subtle animation, or stop the animation altogether. We can use the prefers-reduced-motion
media query to achieve this.
The easiest way is to enable view transitions only for people who have no preference for reduced motion. For people with the preference, it is disabled by default:
\\n/* Enable view transitions for everyone except those who prefer reduced motion */\\n@media (prefers-reduced-motion: no-preference) {\\n @view-transition {\\n navigation: auto;\\n }\\n}\\n\\n
When working with cross-document view transitions, be careful if you are working with a hot reload development server. If pages are getting cached or the page is not being fully reloaded, then you may not see your changes reflected.
\\nI found the easiest way to ensure caching is not taking place is to have the dev tools open and select the Disable cache checkbox on the Network tab:
\\nAlso, if you are used to debugging using console.log()
or similar, this is not effective when you are working across two pages. With every navigation, the console log will be cleared. It is better to use sessionStorage
for logging if this is your preferred debugging method.
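A tiny helper along these lines can stand in for console.log across navigations (a sketch; the "logs" key name is arbitrary):

// Append a log entry to sessionStorage so it survives page navigations
function slog(message) {
  const logs = JSON.parse(sessionStorage.getItem('logs') ?? '[]');
  logs.push(`${new Date().toISOString()} ${message}`);
  sessionStorage.setItem('logs', JSON.stringify(logs));
}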
All of the demos I covered in this article can be found in this GitHub repo. I also included some of the demos prepared by the Chrome DevRel team.
\\nHere are links to the live pages of the demos mentioned:
\\nNavigation between pages must be within four seconds to see the view transitions.
\\nThe ability to add page transitions and state-based UI changes to any website is a significant step forward. Being able to apply view transitions without bulky JavaScript dependencies and the heady complexity of frameworks is good for users and developers! You can get up and running with some straightforward CSS. If you need conditional logic for a view transition, then you need to write some JavaScript code.
\\nGenerally, I’ve been impressed by the capability. However, I must admit that I struggled to understand aspects of using cross-document view transitions. The relationship between the View Transition API and companion APIs such as NavigationActivation
was not apparent from the explanations I read. Once you get over those comprehension hurdles, then you can write effective view transitions with JavaScript code of a moderate length.
The browser support is strong for the APIs related to view transitions with both Chrome and Safari covering the majority of them. In any case, you can use view transitions as a progressive enhancement. Be mindful that it is a new web feature, so you may stumble upon some issues.
\\nIt is also important to understand that cross-page view transitions require fast-loading pages. If a navigation takes over four seconds, Chrome will bail on the view transition. A complementary web feature that you can use to speed up navigation is prerendering. The Speculation Rules API is designed to improve performance for future navigations. These features point towards a faster and more capable web, but it will take people to build websites in a new fashion to realize the benefits.
\\nThe capabilities of view transitions are also expanding. Nested view transitions have been added recently and some experimental additions are being explored. The Chrome DevRel team has stated that they want to add more options for the conditions of navigation, maybe even permit cross-origin view transitions!
\\nGive view cross-document view transitions a try!
The latest release of .NET MAUI 9 includes many updates to Blazor, and the MAUI framework in general.
\\nThe improvements include new hybrid and web app templates, detection of component render mode, and an improved reconnection experience. Developers can take advantage of these improvements to build better user experiences that more accurately react and respond to changes in the application. The templates specifically are a great benefit to developers as they enable cross-platform projects right from the start.
\\nThis article will cover the upgrades and how they generally improve the Blazor experience with .NET MAUI.
\\nFormally known as the Multi-platform App UI, MAUI is the latest evolution of Xamarin Forms and is designed to be a streamlined approach to building cross-platform applications. With MAUI, you can build an application in one codebase and then deploy it to multiple platforms. This includes desktop, mobile, and now even web applications, all while using .NET C#. The platforms supported include:
\\nMAUI includes modern architecture features such as:
\\nMAUI also offers developer-friendly features like hot reloading, debugging, built-in templates, and even design with XAML UI. The framework also provides a streamlined approach to application development making it a great tool for teams that may not have larger organization support when building and deploying applications.
\\nBlazor is a free and open-source web framework developed by Microsoft. It allows you to build interactive applications using .NET C# instead of traditional JavaScript. As part of the .NET platform, Blazor is regularly supported by Microsoft. It provides different hosting models that serve different team requirements:
\\nBlazor Server runs applications server-side and handles UI updates with Microsoft’s SignalR service. All C# code in Blazor Server is executed server-side, and this offering tends to work well for internal corporate networks and applications.
\\nBlazor WebAssembly (WASM) runs applications entirely in the browser. The .NET runtime is downloaded to the client and can run offline once loaded. Blazor WASM is great for public-facing applications that need to scale and use larger runtime environments like .NET.
\\nBlazor Hybrid allows applications to run inside desktop or mobile apps. Referring back to MAUI, Blazor Hybrid is how MAUI enables teams to have a single codebase for applications on multiple platforms. Blazor Hybrid allows for native capabilities in application development.
\\nGenerally speaking, Blazor enables a component-based architecture similar to the larger frameworks like React and Angular.
\\n.NET developers can utilize their skills on the backend without needing as much frontend knowledge as would be normally required for something like React or Angular. Blazor lets developers utilize features like dependency injection and other .NET patterns for frontend applications.
\\nAs stated earlier, .NET MAUI enables teams to have a single codebase for application development. Teams do not have to utilize Blazor for MAUI projects, but if they do use Blazor, they can leverage the newest .NET 9 release benefits.
\\nUtilizing the new Hybrid and Web App templates of .NET MAUI Blazor enables developers to have a single shared codebase that can then be applied to different platforms like iOS and Android.
\\n\\nTo get started, follow the installation instructions for either Visual Studio (Windows) or VSCode (macOS). Once the installs have been accomplished, the templates are readily available. Here is what you would see in Visual Studio (if you are using a Mac, check out the VSCode extension):
\\nIf you want to take advantage of the Blazor Hybrid App and Web templates, choose .NET MAUI Blazor Hybrid and Web App. You should see something like the following created:
\\nYou’ll notice that there are three projects created:
\\nWithin the MAUI App, you’ll see that the MAUIProgram.cs
file builds with a FormFactor
depending on what is defined in the shared project:
public static class MauiProgram\\n {\\n public static MauiApp CreateMauiApp()\\n {\\n var builder = MauiApp.CreateBuilder();\\n builder\\n .UseMauiApp<App>()\\n .ConfigureFonts(fonts =>\\n {\\n fonts.AddFont(\\"OpenSans-Regular.ttf\\", \\"OpenSansRegular\\");\\n });\\n\\n // Add device-specific services used by the MauiApp3.Shared project\\n builder.Services.AddSingleton<IFormFactor, FormFactor>();\\n\\n builder.Services.AddMauiBlazorWebView();\\n\\n#if DEBUG\\n builder.Services.AddBlazorWebViewDeveloperTools();\\n builder.Logging.AddDebug();\\n#endif\\n\\n return builder.Build();\\n }\\n }\\n\\n
Similarly in the web project, you’ll notice that the Program.cs
file builds a Blazor Server project based on what was shared:
using MauiApp3.Shared.Services;\\nusing MauiApp3.Web.Components;\\nusing MauiApp3.Web.Services;\\n\\nvar builder = WebApplication.CreateBuilder(args);\\n\\n// Add services to the container.\\nbuilder.Services.AddRazorComponents()\\n .AddInteractiveServerComponents();\\n\\n// Add device-specific services used by the MauiApp3.Shared project\\nbuilder.Services.AddSingleton<IFormFactor, FormFactor>();\\n\\nvar app = builder.Build();\\n\\n// Configure the HTTP request pipeline.\\nif (!app.Environment.IsDevelopment())\\n{\\n app.UseExceptionHandler(\\"/Error\\", createScopeForErrors: true);\\n // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.\\n app.UseHsts();\\n}\\n\\napp.UseHttpsRedirection();\\n\\napp.UseStaticFiles();\\napp.UseAntiforgery();\\n\\napp.MapRazorComponents<App>()\\n .AddInteractiveServerRenderMode()\\n .AddAdditionalAssemblies(typeof(MauiApp3.Shared._Imports).Assembly);\\n\\napp.Run();\\n\\n
If you open platforms within the MAUI app, you’ll see there are templates and handlers that do all of what is needed for whatever platform you are building for:
\\nAs a developer, this makes your life significantly easier. All you have to do is build out your components and application in the Shared
project and then select the platform you want to build for. Visual Studio (or the extensions for VSCode) will handle the work of building your code, running it locally, and even deploying it using the applicable simulators or binaries installed on your machine:
This is a pretty significant offering. Previously, you would have to build applications with a shared class library and then import that into your MAUI projects. Now with these templates, all of this is done for you from the start. Your development setup is greatly simplified and streamlined.
\\nIn addition to the built-in templates, the new release also includes several other optimizations that improve the experience of Blazor app development with MAUI. In the examples below, I’ll be providing code snippets that can originally be found in the Microsoft documentation on .NET 9. I recommend reviewing the .NET 9 docs for more information.
\\nThe new release includes a detection of component rendering mode. This allows developers to specify what flavor of Blazor they want to use when building. To utilize different modes across your application you first need to add support in the initial Program.cs
file so that you can use it in your project:
<Dialog @rendermode=\\"InteractiveServer\\" />\\n\\n
To do this, you’ll also need to bring the render modes into scope with @using static Microsoft.AspNetCore.Components.Web.RenderMode.
At the top of your Razor files in your Blazor application, you can specify the render mode:
\\n@page \\"...\\"\\n@rendermode InteractiveServer\\n\\n
You can also specify a render mode for the entire application route:
\\n<Routes @rendermode=\\"InteractiveServer\\" />\\n\\n
You can set the render mode programmatically in the component definition:
\\n<Routes @rendermode=\\"PageRenderMode\\" />\\n\\n...\\n\\n@code {\\n private IComponentRenderMode? PageRenderMode => InteractiveServer;\\n}\\n\\n
With these modes set, you’ve unlocked certain application behaviors, such as only displaying when a component is interactive:
\\n<button @onclick=\\"Send\\" disabled=\\"@(!RendererInfo.IsInteractive)\\">\\n Send\\n</button>\\n\\n
There are many more options to control your app behavior utilizing these rendering behaviors: prerendering, SSR, and automatic rendering. These all significantly improve the developer experience with .NET MAUI Blazor applications, as you can better control behaviors within the user experience.
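For instance, prerendering can be disabled for a component by constructing the render mode explicitly. A sketch based on the documented pattern:

@page "..."
@using Microsoft.AspNetCore.Components.Web

@* Interactive server rendering without prerendering *@
@rendermode @(new InteractiveServerRenderMode(prerender: false))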
\\nOne key issue with Blazor Server development has always been the reconnection experience. Blazor Server is built on top of SignalR, and utilizes connections with SignalR for application updates.
\\nWhen connections to the service would break, users would typically see a message asking for a refresh, or experience a general loss of data. This meant it was up to the developer to try and resolve connection issues or loss of data if this occurred.
\\nWith .NET MAUI 9 Blazor, the reconnection experience has been improved. There are several options for teams to customize how connections to SignalR are handled. When initially building your app in your Program.cs
file, you can configure options for the interactive server components:
builder.Services.AddRazorComponents().AddInteractiveServerComponents(options =>\\n{\\n options.{OPTION} = {VALUE};\\n});\\n\\n
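For instance, here is a sketch with illustrative values for two of the documented CircuitOptions properties:

builder.Services.AddRazorComponents().AddInteractiveServerComponents(options =>
{
    // How long a disconnected circuit is kept in memory before being discarded
    options.DisconnectedCircuitRetentionPeriod = TimeSpan.FromMinutes(5);

    // How many disconnected circuits the server retains at once
    options.DisconnectedCircuitMaxRetained = 100;
});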
With these options, you can specify what to do with idle timeouts and authentication windows. You can also log and handle connections with a CircuitId
like in this example from the Microsoft Docs:
using Microsoft.AspNetCore.Components.Server.Circuits;\\nusing Microsoft.Extensions.Options;\\nusing Timer = System.Timers.Timer;\\n\\npublic sealed class IdleCircuitHandler : CircuitHandler, IDisposable\\n{\\n private Circuit? currentCircuit;\\n private readonly ILogger logger;\\n private readonly Timer timer;\\n\\n public IdleCircuitHandler(ILogger<IdleCircuitHandler> logger, \\n IOptions<IdleCircuitOptions> options)\\n {\\n timer = new Timer\\n {\\n Interval = options.Value.IdleTimeout.TotalMilliseconds,\\n AutoReset = false\\n };\\n\\n timer.Elapsed += CircuitIdle;\\n this.logger = logger;\\n }\\n\\n private void CircuitIdle(object? sender, System.Timers.ElapsedEventArgs e)\\n {\\n logger.LogInformation(\\"{CircuitId} is idle\\", currentCircuit?.Id);\\n }\\n\\n public override Task OnCircuitOpenedAsync(Circuit circuit, \\n CancellationToken cancellationToken)\\n {\\n currentCircuit = circuit;\\n\\n return Task.CompletedTask;\\n }\\n\\n public override Func<CircuitInboundActivityContext, Task> CreateInboundActivityHandler(\\n Func<CircuitInboundActivityContext, Task> next)\\n {\\n return context =>\\n {\\n timer.Stop();\\n timer.Start();\\n\\n return next(context);\\n };\\n }\\n\\n public void Dispose() => timer.Dispose();\\n}\\n\\npublic class IdleCircuitOptions\\n{\\n public TimeSpan IdleTimeout { get; set; } = TimeSpan.FromMinutes(5);\\n}\\n\\npublic static class IdleCircuitHandlerServiceCollectionExtensions\\n{\\n public static IServiceCollection AddIdleCircuitHandler(\\n this IServiceCollection services, \\n Action<IdleCircuitOptions> configureOptions)\\n {\\n services.Configure(configureOptions);\\n services.AddIdleCircuitHandler();\\n\\n return services;\\n }\\n\\n public static IServiceCollection AddIdleCircuitHandler(\\n this IServiceCollection services)\\n {\\n services.AddScoped<CircuitHandler, IdleCircuitHandler>();\\n\\n return services;\\n }\\n}\\n\\n
The general approach is to handle the “circuit,” or connection to SignalR, gracefully. There are many good defaults built in, such as how many times to attempt a reconnect and the intervals between attempts. There is a nice write-up on this behavior in Jon Hilton’s post on the same topic.
\\n.NET MAUI 9 also offers UI updates, letting you apply custom styles to the reconnection state shown to the user, as demonstrated in this Microsoft article.
\\nOverall, the new release of .NET MAUI 9 brings several great improvements for developers.
\\nThe hybrid and web templates are a great addition, helping to streamline the creation of new applications. The ability to control rendering allows Blazor developers to more tightly control their UI experience for their customers. Controlling reconnection states with SignalR is also a great win for teams, as they can create a more graceful way to handle this historical issue with Blazor applications.
\\nHopefully, this article has shown you some of these improvements and helped you learn more about Blazor and MAUI in the new release of .NET 9. Thanks for reading!
React Islands gives us a practical approach to integrating React components into existing web applications. This architectural pattern enables teams to introduce modern React features into legacy codebases without requiring a complete rewrite, allowing for incremental modernization while maintaining system stability.
\\nThink of it as dropping modern functionality into your older codebase without the headache of starting from scratch. It’s perfect for teams looking to modernize gradually while keeping their apps stable and running smoothly.
\\nIn this guide, we’ll walk through implementing React Islands step by step, using a real-world example. You’ll learn how to:
\\nThe core concept of React Islands involves selectively hydrating specific sections of the page while leaving the rest static. This approach minimizes the JavaScript payload, resulting in faster page loads and reduced browser evaluation time. It’s particularly beneficial for slower connections or less powerful devices.
\\nBefore we dive in, let’s talk about when this approach makes sense – and when it doesn’t.
\\nThe good
\\nReact Islands allow you to gradually modernize your app, avoiding the need for a complete rewrite. You can introduce React incrementally, focusing on areas where you need it while leaving the rest of the application intact. This minimizes risk and disruption.
\\nYou can absolutely expect performance improvements with React Islands. Because you’re only hydrating what’s necessary, you get faster initial page loads; you’re no longer shipping and evaluating a full React bundle for the entire page.
\\nOn the topic of hydration, let’s put some things into context. Traditionally, React hydration works by server-rendering the entire page. The client then loads React and rehydrates the entire page, making all components interactive at once. In contrast, the “Island Architecture” modifies this process by keeping most of the page static. Only specific components are hydrated, and hydration occurs independently for each island.
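To make the island model concrete, here is a minimal sketch of hydrating a single container, assuming the server already rendered matching HTML into it; the container id and component are hypothetical:

```tsx
import React from "react";
import { hydrateRoot } from "react-dom/client";
import SearchIsland from "./SearchIsland"; // hypothetical island component

// Hydrate only this one container; the rest of the page stays static HTML.
const container = document.getElementById("search-island");
if (container) {
  hydrateRoot(container, <SearchIsland />);
}
```

Each island gets its own hydrateRoot call, so a slow island never blocks the rest of the page from being usable.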
\\nFinally, these islands help with SEO, seeing as static content is readily available for search engines to crawl.
\\nThe not-so-good
\\nLet’s talk about the challenges you’ll face with React Islands. While it’s powerful, there are some real gotchas to watch out for.
State management gets messy since the islands are isolated from each other – they’re like separate mini-apps that can’t easily share states or communicate. You’ll need workarounds like custom events or global state management, which adds complexity. In another section, we’ll take a look at how a real-world implementation handles this.
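To give a flavor of the custom-event workaround mentioned above, here is a hedged sketch; the event name and payload shape are assumptions, not part of any particular implementation:

```ts
// Island A: broadcast a selection without knowing who is listening.
window.dispatchEvent(
  new CustomEvent("island:product-selected", { detail: { productId: 42 } })
);

// Island B: react to the event without a direct reference to Island A.
window.addEventListener("island:product-selected", (event) => {
  const { productId } = (event as CustomEvent<{ productId: number }>).detail;
  console.log("Another island selected product", productId);
});
```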
\\n\\nLoading coordination is tricky because multiple islands might need the same dependencies. Without proper planning, you risk downloading duplicate code or running into race conditions where islands wait on each other’s resources.
\\nLayout shifts can ruin the user experience when your islands hydrate and change size. Your static HTML might render differently from the final interactive state, causing content to jump around during load.
\\nMeasuring performance becomes more nuanced — traditional metrics don’t capture the full picture since different parts of your page become interactive at different times.
\\nPerfect use cases
\\nIn this article from The Guardian, they discuss their approach to integrating React Islands and some of the challenges they encountered during the process. Let’s highlight some of the key takeaways and insights from their experience:
\\n1. Developer experience matters
\\nWhile initial implementations often focus on performance metrics, The Guardian‘s experience shows that developer experience is crucial for long-term success. Their second implementation achieved its performance goals but proved challenging to maintain.
This led to their third implementation, which prioritized developer experience alongside performance, providing “guard rails” to help developers naturally write performant code.
\\n\\n2. State management expectations
\\nOne unexpected benefit from the isolation inherent in the islands’ architecture? Simplified state management. By treating each island as a self-contained unit with a local state, codebases become easier to understand and maintain. This architectural constraint turned out to be a feature, not a bug.
Let’s address why “state management” takes a negative tone in the previous section but a positive one here. State isolation can be a benefit or a pitfall, depending on your architectural approach and requirements.
\\n\\nThe Guardian succeeded because they leaned into isolation, accepting some duplication as a reasonable trade-off. However, if your application requires significant state sharing between components, this same isolation becomes a complex challenge that you need to overcome.
\\n3. Data fetching considerations
\\nMultiple islands often need the same data, potentially leading to duplicate API calls. Typical solutions include deduplicating requests behind a shared fetch cache, embedding the data in the server-rendered HTML so each island can read it locally, or making a single island the owner of a given resource. A sketch of the first approach follows below.
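Here is a minimal sketch of the request-deduplication idea; the helper name and cache are assumptions rather than a library API:

```ts
// One in-flight request per URL, shared by every island on the page.
const cache = new Map<string, Promise<unknown>>();

export function fetchShared<T>(url: string): Promise<T> {
  if (!cache.has(url)) {
    cache.set(
      url,
      fetch(url).then((res) => {
        if (!res.ok) throw new Error(`Request failed: ${res.status}`);
        return res.json();
      })
    );
  }
  return cache.get(url) as Promise<T>;
}
```

Two islands that call fetchShared("/api/products") concurrently now share one network request and one parsed response.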
Let’s start with a simple product catalog page. Here’s our initial vanilla JavaScript version:
\\n<body>\\n <div class=\\"container\\">\\n <h1>Product Catalog</h1>\\n <div id=\\"search-container\\">\\n <input type=\\"text\\" id=\\"searchInput\\" placeholder=\\"Search products...\\">\\n </div>\\n <div id=\\"product-list\\" class=\\"product-list\\"></div>\\n </div>\\n</body>\\n\\n
Let’s add some basic JavaScript functionality:
\\n// Product data\\nconst products = [\\n { id: 1, name: \\"Laptop\\", price: 999.99 },\\n { id: 2, name: \\"Smartphone\\", price: 699.99 },\\n { id: 3, name: \\"Headphones\\", price: 199.99 }\\n];\\n\\n// Render products\\nfunction renderProducts(productsToRender) {\\n const productList = document.getElementById(\'product-list\');\\n productList.innerHTML = productsToRender.map(product => `\\n <div class=\\"product-card\\">\\n <h3>${product.name}</h3>\\n <p>$${product.price}</p>\\n </div>\\n `).join(\'\');\\n}\\n\\n// Initial render\\nrenderProducts(products);\\n\\n
Now comes the interesting part. Let’s learn how to upgrade our search functionality to use React.
\\n<script src=\\"https://unpkg.com/react@18/umd/react.development.js\\"></script>\\n<script src=\\"https://unpkg.com/react-dom@18/umd/react-dom.development.js\\"></script>\\n<script src=\\"https://unpkg.com/babel-standalone@6/babel.min.js\\"></script>\\n
function SearchIsland({ onSearch }) {\\n const [searchTerm, setSearchTerm] = React.useState(\'\');\\n\\n const handleSearch = (event) => {\\n const value = event.target.value;\\n setSearchTerm(value);\\n onSearch(value);\\n };\\n\\n return (\\n <div className=\\"search-island\\">\\n <input\\n type=\\"text\\"\\n value={searchTerm}\\n onChange={handleSearch}\\n placeholder=\\"Search products...\\"\\n />\\n <small>⚛️ React-powered search</small>\\n </div>\\n );\\n}\\n\\n
Then, mount the island into the existing page and wire its search callback back into the vanilla rendering logic:
function mountSearchIsland() {\\n const searchContainer = document.getElementById(\'search-container\');\\n const handleSearch = (searchTerm) => {\\n const filtered = products.filter(product =>\\n product.name.toLowerCase().includes(searchTerm.toLowerCase())\\n );\\n renderProducts(filtered);\\n };\\n\\n ReactDOM.render(\\n <SearchIsland onSearch={handleSearch} />,\\n searchContainer\\n );\\n}\\n\\n// Initialize\\nmountSearchIsland();\\n
Let’s make things more interesting by adding a product selection that communicates both ways:
\\nfunction ProductIsland({ product, onSelect }) {\\n const [isSelected, setIsSelected] = React.useState(false);\\n\\n const handleClick = () => {\\n setIsSelected(!isSelected);\\n onSelect(product, !isSelected);\\n };\\n\\n return (\\n <div \\n className={`product-card ${isSelected ? \'selected\' : \'\'}`}\\n onClick={handleClick}\\n >\\n <h3>{product.name}</h3>\\n <p>${product.price}</p>\\n {isSelected && <span>✓</span>}\\n </div>\\n );\\n}\\n\\n
Update your rendering logic:
\\nfunction renderProducts(productsToRender) {\\n const productList = document.getElementById(\'product-list\');\\n productList.innerHTML = \'\';\\n\\n productsToRender.forEach(product => {\\n const productContainer = document.createElement(\'div\');\\n ReactDOM.render(\\n <ProductIsland \\n product={product}\\n onSelect={(product, isSelected) => {\\n updateCart(product, isSelected);\\n }}\\n />,\\n productContainer\\n );\\n productList.appendChild(productContainer);\\n });\\n}\\n\\n
So, what have we learned? While React Islands promises dramatic performance improvements, the reality is more nuanced. The Guardian‘s journey shows us that success isn’t just about technical implementation — it’s about embracing constraints and building around them.
\\nIs React Islands the future of web development? Probably not on its own. But it’s part of a bigger picture in how we think about building performant web applications. It gives us another powerful tool in our toolbox. The key is understanding when to use it and, just as importantly, when not to.
\\nThanks for reading, and happy coding!
The gap between design and development often frustrates many teams. Designers use tools like Figma to create beautiful interfaces, but developers find it hard to turn these designs into working code using traditional IDEs. This disconnect sometimes slows down development and can lead to misunderstandings and costly fixes.
\\nOnlook is a new tool that helps bridge this gap by bringing design features directly into your development environment. With Onlook, you can improve your workflow, enhance teamwork between designers and developers, and deliver products faster and more accurately.
\\nIn this article, we’ll explore Onlook, a tool that brings design capabilities directly into the development environment. We will look at how it compares to existing tools like Figma and Webflow, and provide a practical guide to setting it up.
\\nOnlook is an open source visual editor designed specifically for React applications. It enables developers to create and modify user interfaces while working with their live React components. Onlook provides a direct manipulation interface similar to Figma, offering features like drag-and-drop editing, real-time preview, and seamless code generation.
\\nWith Onlook, developers can visually manipulate components, adjust styles, and see changes in real time, all while working within their application’s actual codebase. Its direct integration simplifies the handoff process between design and development, reducing inconsistencies and streamlining the overall workflow.
\\nUnlike traditional design tools or no-code builders, Onlook lets developers integrate their existing React projects directly and keep full development control while gaining the benefits of visual editing. Onlook is relatively new, with over 4.2k GitHub stars and more than 40 contributors, and the project regularly ships updates and new features.
\\nOnlook is a standalone app, but to follow along with this tutorial, you will also need:
\\nTo get started with Onlook, follow these steps:
\\nOnce running, you will be presented with your application’s homepage.
\\nNow that we’ve got our app running, let’s dive into the Onlook interface before designing our first app.
\\nWhen you first open Onlook, you’ll see an interface that combines visual editing with code features. The interface has several key areas, each with a specific purpose in your development process.
\\nThe main workspace has three main sections:
\\nAdditionally, there are other important sections:
\\nThe Onlook interface connects design and development. Any design change you make updates the underlying React code immediately. Any code changes are also reflected in the design. This two-way relationship is what differentiates Onlook from traditional design tools.
\\nUnlike traditional workflows where you export designs to code, Onlook maintains a live connection between your visual edits and the React code. This allows for:
\\nLet’s break it down to the components. When you’re working on a component, you can select it directly on the canvas, tweak its styles and layout visually, and watch the corresponding React code update in real time.
\\nLet’s create a to-do list app using Onlook to show how the visual editor works with React. This example will highlight Onlook’s main features while building a real application that uses state management, user interactions, and component composition.
\\nWhen you start designing in Onlook, you have two options: you can either use the visual editor directly, or you can use Onlook’s AI assistant. Beginners or those who want guidance may find the AI assistant helpful. You can describe what you want, and the AI will help you create a basic layout that you can improve:
\\nThe visual editor lets you drag and drop elements, change styles, and see quick results. As you make changes, Onlook automatically writes the React code for you. At first, all the code goes into the page.tsx
file, which is a starting point, but not the best setup for a production app.
Structuring the application
\\nWhile Onlook’s automatic code generation is useful, real-world applications need a clearer structure. To address this, we can use component-based architecture. This approach will separate concerns and make the code easier to maintain.
Here’s how we will organize our project:
\\napp/\\n├── page.tsx\\n└── components/\\n ├── TodoContainer.tsx\\n ├── TodoForm.tsx\\n ├── TodoList.tsx\\n ├── TodoItem.tsx\\n └── types.ts\\n\\n
This structure follows React best practices by breaking the user interface into clear, reusable components. Each component, which we’ll demonstrate below, has a specific function, helping the code stay organized and easier to understand.
\\nOnlook’s Types
module:
// types.ts\\nexport interface Todo {\\n id: number;\\n text: string;\\n completed: boolean;\\n}\\n\\n
Onlook’s Page
component:
// page.tsx\\nimport TodoContainer from \'./components/TodoContainer\';\\n\\nexport default function Page() {\\n return (\\n <div className=\\"w-full min-h-screen bg-gradient-to-br from-purple-50 to-white dark:from-gray-900 dark:to-gray-800 p-4\\">\\n <div className=\\"max-w-2xl mx-auto\\">\\n <div className=\\"text-center mb-8\\">\\n <h1 className=\\"text-4xl font-bold text-purple-600 dark:text-purple-400 mb-2\\">\\n Todo List\\n </h1>\\n <p className=\\"text-gray-600 dark:text-gray-300\\">\\n Stay organized and productive\\n </p>\\n </div>\\n <TodoContainer />\\n </div>\\n </div>\\n );\\n}\\n\\n
Onlook’s TodoContainer
component:
// components/TodoContainer.tsx\\n\'use client\';\\n\\nimport { useState } from \'react\';\\nimport TodoForm from \'./TodoForm\';\\nimport TodoList from \'./TodoList\';\\nimport { Todo } from \'./types\';\\n\\nexport default function TodoContainer() {\\n const [todos, setTodos] = useState<Todo[]>([]);\\n const [newTodo, setNewTodo] = useState(\'\');\\n\\n const addTodo = (e: React.FormEvent) => {\\n e.preventDefault();\\n if (newTodo.trim()) {\\n setTodos([\\n ...todos,\\n {\\n id: Date.now(),\\n text: newTodo.trim(),\\n completed: false,\\n },\\n ]);\\n setNewTodo(\'\');\\n }\\n };\\n\\n const toggleTodo = (id: number) => {\\n setTodos(\\n todos.map((todo) => (todo.id === id ? { ...todo, completed: !todo.completed } : todo)),\\n );\\n };\\n\\n const deleteTodo = (id: number) => {\\n setTodos(todos.filter((todo) => todo.id !== id));\\n };\\n\\n return (\\n <>\\n <TodoForm\\n newTodo={newTodo}\\n setNewTodo={setNewTodo}\\n addTodo={addTodo}\\n />\\n <TodoList\\n todos={todos}\\n toggleTodo={toggleTodo}\\n deleteTodo={deleteTodo}\\n />\\n </>\\n );\\n}\\n\\n
Onlook’s TodoForm
component:
// components/TodoForm.tsx\\n\'use client\';\\n\\ninterface TodoFormProps {\\n newTodo: string;\\n setNewTodo: (value: string) => void;\\n addTodo: (e: React.FormEvent) => void;\\n}\\n\\nexport default function TodoForm({ newTodo, setNewTodo, addTodo }: TodoFormProps) {\\n return (\\n <form onSubmit={addTodo} className=\\"mb-8\\">\\n <div className=\\"flex gap-2\\">\\n <input\\n type=\\"text\\"\\n value={newTodo}\\n onChange={(e) => setNewTodo(e.target.value)}\\n placeholder=\\"What needs to be done?\\"\\n className=\\"flex-1 p-3 rounded-lg border border-gray-300 dark:border-gray-600 \\n bg-white dark:bg-gray-700 text-gray-900 dark:text-white\\n focus:ring-2 focus:ring-purple-500 focus:border-transparent\\"\\n />\\n <button\\n type=\\"submit\\"\\n className=\\"px-6 py-3 bg-purple-600 text-white rounded-lg\\n hover:bg-purple-700 focus:outline-none focus:ring-2\\n focus:ring-purple-500 focus:ring-offset-2\\n transition-colors duration-200\\"\\n >\\n Add\\n </button>\\n </div>\\n </form>\\n );\\n}\\n\\n
Onlook’s TodoList
component:
// components/TodoList.tsx\\n\'use client\';\\n\\nimport { Todo } from \'./types\';\\nimport TodoItem from \'./TodoItem\';\\n\\ninterface TodoListProps {\\n todos: Todo[];\\n toggleTodo: (id: number) => void;\\n deleteTodo: (id: number) => void;\\n}\\n\\nexport default function TodoList({ todos, toggleTodo, deleteTodo }: TodoListProps) {\\n return (\\n <div className=\\"bg-white dark:bg-gray-800 rounded-xl shadow-lg p-6\\">\\n {todos.length === 0 ? (\\n <p className=\\"text-center text-gray-500 dark:text-gray-400 py-8\\">\\n No todos yet. Add one above!\\n </p>\\n ) : (\\n <ul className=\\"space-y-3\\">\\n {todos.map((todo) => (\\n <TodoItem\\n key={todo.id}\\n todo={todo}\\n toggleTodo={toggleTodo}\\n deleteTodo={deleteTodo}\\n />\\n ))}\\n </ul>\\n )}\\n\\n {todos.length > 0 && (\\n <div className=\\"mt-6 pt-6 border-t border-gray-200 dark:border-gray-700\\">\\n <div className=\\"flex justify-between text-sm text-gray-600 dark:text-gray-400\\">\\n <span>{todos.filter((t) => !t.completed).length} items left</span>\\n <span>{todos.filter((t) => t.completed).length} completed</span>\\n </div>\\n </div>\\n )}\\n </div>\\n );\\n}\\n\\n
Onlook’s TodoItem
component:
// components/TodoItem.tsx\\n\'use client\';\\n\\nimport { Todo } from \'./types\';\\n\\ninterface TodoItemProps {\\n todo: Todo;\\n toggleTodo: (id: number) => void;\\n deleteTodo: (id: number) => void;\\n}\\n\\nexport default function TodoItem({ todo, toggleTodo, deleteTodo }: TodoItemProps) {\\n return (\\n <li className=\\"flex items-center gap-3 bg-gray-50 dark:bg-gray-700/50 \\n p-4 rounded-lg group\\">\\n <input\\n type=\\"checkbox\\"\\n checked={todo.completed}\\n onChange={() => toggleTodo(todo.id)}\\n className=\\"w-5 h-5 rounded border-gray-300 \\n text-purple-600 focus:ring-purple-500\\"\\n />\\n <span\\n className={`flex-1 ${\\n todo.completed\\n ? \'line-through text-gray-400 dark:text-gray-500\'\\n : \'text-gray-700 dark:text-gray-200\'\\n }`}\\n >\\n {todo.text}\\n </span>\\n <button\\n onClick={() => deleteTodo(todo.id)}\\n className=\\"opacity-0 group-hover:opacity-100 transition-opacity\\n text-red-500 hover:text-red-600 p-1\\"\\n >\\n <svg\\n xmlns=\\"http://www.w3.org/2000/svg\\"\\n className=\\"h-5 w-5\\"\\n viewBox=\\"0 0 20 20\\"\\n fill=\\"currentColor\\"\\n >\\n <path\\n fillRule=\\"evenodd\\"\\n d=\\"M9 2a1 1 0 00-.894.553L7.382 4H4a1 1 0 000 2v10a2 2 0 002 2h8a2 2 0 002-2V6a1 1 0 100-2h-3.382l-.724-1.447A1 1 0 0011 2H9zM7 8a1 1 0 012 0v6a1 1 0 11-2 0V8zm5-1a1 1 0 00-1 1v6a1 1 0 102 0V8a1 1 0 00-1-1z\\"\\n clipRule=\\"evenodd\\"\\n />\\n </svg>\\n </button>\\n </li>\\n );\\n}\\n\\n
One useful feature of Onlook is its ability to link the visual editor and the code in both directions. This is done using a special data-oid
attribute that Onlook adds to components during development:
<TodoContainer data-oid=\\"zshvwmp\\"/>\\n\\n
This attribute helps Onlook keep track of changes to components, connect visual elements with the code, allow real-time updates in both directions and maintain component functions while editing visually.
\\nWhile you build your application, you can use Onlook’s visual editor to improve the design without hurting the quality of your code. For instance, you can change component layouts, adjust spacing, refine colors and fonts, and test how your app responds to different screen sizes:
\\nWeb development tools meet different needs and workflows. They offer solutions for both design and development. To understand where Onlook fits in, we’ll compare it to other tools, highlighting its strengths and weaknesses.
\\nOnlook vs. Figma: Comparing traditional design tools
\\nTraditional design tools are strong at illustration, collaboration, prototyping, and design asset management. Onlook offers something different by allowing direct manipulation of React components, real-time interaction with component states and props, production-ready code, seamless integration with development workflows, and active testing of component behavior during design.
Onlook vs. Webflow: Comparing no-code builders
\\nNo-code platforms offer visual development and code export; they are strong at rapid prototyping, visual CMS integration, and built-in hosting. Onlook differs by integrating directly into existing React codebases, maintaining component logic, and staying developer-first while allowing visual editing. It works with your development tools, supports custom components with full React features, and produces clean React code.
Onlook improves workflow across the React ecosystem. It improves cohesion between design and development teams and allows users to edit React components visually while keeping the code intact. By syncing visual changes with the code in real time, Onlook resolves a variety of common workflow issues. In addition, its AI features and smooth integration with existing React projects make it a useful addition to modern development processes.
\\n\\nAlthough Onlook is still being improved, it shows great potential to change how React applications are built. As it adds new features, it could become a vital tool for React developers looking to enhance their workflows and create high-quality user experiences more easily. Check out Onlook and try out its features in your projects.
JavaScript generators allow you to easily define iterators and write code that can be paused and resumed, giving you fine-grained control over the execution flow. In this article, we’ll explore how generators let you “yield” from a function to manage state, report progress on long-running operations, and make your code more readable.
\\nWhile many developers immediately reach for tools like RxJS or other observables to handle asynchronous tasks, generators are often overlooked — and they can be surprisingly powerful.
\\nWe’ll compare generators with popular solutions like RxJS, showing you where generators shine and where more conventional approaches might be a better fit. Without further ado, let’s get started!
\\nSimply put, generators are special functions in JavaScript that can be paused and resumed, allowing you to control the flow of execution more precisely than you can with regular functions.
\\nA generator function is a special type of function that returns a generator object and conforms to the iterable protocol and the iterator protocol.
\\nGenerators were first introduced in ES6 and have since become a fundamental part of the language. They are defined using the function keyword suffixed with an asterisk like: function*
. Here’s an example:
function* generatorFunction() {\\n return \\"Hello World\\"; //generator body\\n}\\n
Sometimes, you might find the asterisk attached to the function name instead, like function *generatorFunction()
. While this spacing is less common, it is still valid.
At first glance, a generator might look like a normal function (minus the asterisk), but some important differences make them uniquely powerful.
\\nIn a standard function, once you call it, the function runs from start to finish. There’s no way to pause halfway and then pick back up again. Generators, on the other hand, allow you to pause execution at any yield
point.
This pausable nature also preserves state between each pause, making generators perfect for scenarios where you need to keep track of what’s happened so far — like processing a large dataset in chunks. Additionally, when a normal function is called, it runs to completion and returns a value. However, when you call generator functions, they return a generator object. This object is an iterator used for looping through a sequence of values.
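To make the chunked-processing idea concrete, here is a small sketch; the chunk size and data are arbitrary:

```ts
// The generator's saved position (i) is the state that survives between calls.
function* inChunks<T>(items: T[], size: number): Generator<T[]> {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}

const numbers = Array.from({ length: 10 }, (_, i) => i + 1);
for (const chunk of inChunks(numbers, 4)) {
  console.log(chunk); // [1, 2, 3, 4], then [5, 6, 7, 8], then [9, 10]
}
```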
\\nWhen you work with a generator, you don’t just call it once and forget about it. Instead, you use methods like next()
, throw()
, and return()
to control its state from the outside:
next(value)
: Resumes the generator and can pass a value back into the generator, which is received by the last yield
expression. The next()
method returns an object with value
and done
properties. value
represents the returned value, and done
indicates whether the iterator has run through all its values
throw(error)
: Throws an error inside the generator, effectively letting you handle exceptions in a more controlled manner
return(value)
: Ends the generator early, returning the specified value
This two-way communication is a big step up from regular functions and can be used to build more sophisticated workflows, including error handling and partial data processing.
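Here is a quick sketch of throw() and return() in action against a toy infinite counter (the counter itself is hypothetical):

```ts
function* counter() {
  let count = 0;
  try {
    while (true) {
      yield count++;
    }
  } catch (err) {
    console.log("Caught inside generator:", err);
  }
}

const g1 = counter();
g1.next();                   // { value: 0, done: false }
g1.throw(new Error("stop")); // logs the error, then { value: undefined, done: true }

const g2 = counter();
g2.next();      // { value: 0, done: false }
g2.return(99);  // { value: 99, done: true } -- the generator ends early
```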
\\nTo begin, let’s initialize the Hello World generator function we showed earlier and retrieve its value:
\\nconst generator = generatorFunction();\\n
When you call generatorFunction()
and store it in a variable, you won’t see the \\"Hello World\\"
string right away. Instead, you get what’s called a generator object, and it’s initially in a “suspended” state, meaning it’s paused and hasn’t run any code yet.
If you try logging generator
, you’ll see it’s not a plain string. It’s an object representing the paused generator. To get the value of the generator function, we need to call the next()
method on the generator object:
const result = generator.next();\\n
This will give us the following output:
\\n{ value: \'Hello World\', done: true }\\n
It returned our “Hello World” string as the value
property, with done
set to true because there was no more code to execute. As a result, the status of the generator function changes from suspended to closed.
So far, we’ve only seen how to return a single value from a generator function. But what if we want to return multiple values? This is where the yield
operator comes in.
The yield
operator
JavaScript generators allow you to pause and resume function execution using the yield
keyword. For example, imagine you have a generator function like this:
function* generatorFunction() {\\n yield \\"first value\\";\\n yield \\"second value\\";\\n yield \\"third value\\";\\n yield \\"last value\\";\\n}\\n
Each time you call next()
on the generator, the function runs until it hits a yield
statement, and then it pauses. At that point, the generator returns an object with two properties:
value
: The actual value you’re yieldingdone
: A Boolean indicating whether the generator is finishedAs long as there’s another yield
(or until it hits a return
), done
will be false
. Once the generator has no more yield
statements left, done
becomes true
.
Expanding on the above example, if we keep calling the next()
method, we’ll get the following output:
const generator = generatorFunction();\\n\\ngenerator.next(); // { value: \'first value\', done: false }\\ngenerator.next(); // { value: \'second value\', done: false }\\ngenerator.next(); // { value: \'third value\', done: false }\\ngenerator.next(); // { value: \'last value\', done: false }\\ngenerator.next(); // { value: undefined, done: true }\\n
Notice how all four calls to next()
return a yielded value with done: false
, including the final yield. Only after the generator runs out of yield
statements does a further call return { value: undefined, done: true }.
What’s really cool is that yield
isn’t just about returning values — like a two-way street, it can also receive them from wherever the generator is being called, giving you two-way communication between the generator and its caller.
To pass a value to a generator function, we can call the next()
method with an argument. Here’s a simple example:
function* generatorFunction() {\\n console.log(yield);\\n console.log(yield);\\n}\\n\\nconst generator = generatorFunction();\\n\\ngenerator.next(); // First call — no yield has been paused yet, so nothing to pass in\\ngenerator.next(\\"first input\\");\\ngenerator.next(\\"second input\\");\\n
This would log the following sequentially:
\\nfirst input\\nsecond input\\n
See how the first call to generator.next()
doesn’t print anything? That’s because, at that point, there isn’t a paused yield
ready to accept a value. By the time we call generator.next(\\"first input\\")
, there’s a suspended yield
waiting, so \\"first input\\"
gets logged. The same pattern follows for the third call.
This is exactly how generators allow you to pass data back and forth between the caller and the generator itself.
\\nThe arrival of ECMAScript 2018 introduced async generators, a special kind of generator function that works with promises. Thanks to async generators, you’re no longer limited to synchronous code in your generators. You can now handle tasks like fetching data from an API, reading files, or anything else that involves waiting on a promise.
\\nHere’s a simple example of an async generator function:
\\nasync function* asyncGenerator() {\\n yield await Promise.resolve(\\"1\\");\\n yield await Promise.resolve(\\"2\\");\\n yield await Promise.resolve(\\"3\\");\\n}\\n\\nconst generator = asyncGenerator();\\nawait generator.next(); // { value: \'1\', done: false }\\nawait generator.next(); // { value: \'2\', done: false }\\nawait generator.next(); // { value: \'3\', done: true }\\n
The main difference is that you have to use await
on each generator.next()
call to retrieve the value, because everything is happening asynchronously.
We can further demonstrate how to use async generators to view paginated datasets from a remote API. This is a perfect use case for async generators as we can encapsulate our sequential iteration logic in a single function. For this example, we’ll use the free DummyJSON API to fetch a list of paginated products.
\\nTo get data from this API, we can make a GET request to the following endpoint. We’ll pass limit and skip params to limit and skip the results for pagination:
\\nhttps://dummyjson.com/products?limit=10&skip=0\\n
A sample response from this endpoint might look like this:
\\n{\\n \\"products\\": [\\n {\\n \\"id\\": 1,\\n \\"title\\": \\"Annibale Colombo Bed\\",\\n \\"price\\": 1899.99\\n },\\n {...},\\n // 10 items\\n ],\\n \\"total\\": 194,\\n \\"skip\\": 0,\\n \\"limit\\": 10\\n}\\n
To load the next batch of products, you just increase skip
by the same limit
until you’ve fetched everything.
With that in mind, this is how we can implement a custom generator function to fetch all the products from the API:
\\nasync function* fetchProducts(skip = 0, limit = 10) {\\n let total = 0;\\n\\n do {\\n const response = await fetch(\\n `https://dummyjson.com/products?limit=${limit}&skip=${skip}`,\\n );\\n const { products, total: totalProducts } = await response.json();\\n\\n total = totalProducts;\\n skip += limit;\\n yield products;\\n } while (skip < total);\\n}\\n
Now we can iterate over it to get all the products using the for await...of
loop:
for await (const products of fetchProducts()) {\\n for (const product of products) {\\n console.log(product.title);\\n }\\n}\\n
It will log the products until there is no more data to fetch:
\\nEssence Mascara Lash Princess\\nEyeshadow Palette with Mirror\\nPowder Canister\\nRed Lipstick\\nRed Nail Polish\\n... // 15 more items\\n
By wrapping the entire pagination logic in an async generator, your main code remains clean and focused. Whenever you need more data, the generator transparently fetches and yields the next set of results, making pagination feel like a straightforward, continuous stream of data.
\\n\\nWhile generators can be used as simple state machines (they remember where they left off each time), they aren’t always the most practical choice — especially when you consider the robust state management tools offered by most modern JavaScript frameworks.
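For illustration only, a trivial on/off machine built this way might look like the following sketch:

```ts
// The generator's suspended position is the state; no explicit state variable.
function* toggle(): Generator<"on" | "off"> {
  while (true) {
    yield "on";
    yield "off";
  }
}

const light = toggle();
console.log(light.next().value); // "on"
console.log(light.next().value); // "off"
console.log(light.next().value); // "on" again; the generator remembers its position
```

This is fine for a toggle, but once you need named events, guards, and transitions, the generator version grows unwieldy, which is exactly the complexity trade-off described next.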
\\nIn many cases, the extra code and complexity of implementing a state machine with generators can outweigh the benefits.
\\nIf you still want to explore this approach, you might look into the Actor model, which originates from the Erlang programming language. Although the details go beyond the scope of this article, the Actor model is often more effective for managing state.
\\nIn this model, “actors” are independent entities that encapsulate their own state and behavior, and communicate exclusively through message passing. This design ensures that state changes happen only within the actor itself, making the system more modular and easier to reason about.
\\nWhen it comes to processing streams of data, both JavaScript generators and RxJS are great tools, but each comes with its strengths and weaknesses. Lucky for us, they aren’t mutually exclusive so we can use both.
\\nTo demonstrate this, let’s imagine we have an endpoint that returns multiple randomized 8-character strings as a stream. For the first step, we can use a generator function to lazily yield chunks of data as we fetch them from the stream:
\\n// Fetch data from HTTP stream\\nasync function* fetchStream() {\\n const response = await fetch(\\"https://example/api/stream\\");\\n const reader = response.body?.getReader();\\n if (!reader) throw new Error();\\n\\n try {\\n while (true) {\\n const { done, value } = await reader.read();\\n if (done) break;\\n yield value;\\n }\\n } catch (error) {\\n throw error;\\n } finally {\\n reader.releaseLock();\\n }\\n}\\n
Calling fetchStream()
returns an async generator. You can then iterate over these chunks using a loop — or, as we’ll see next, harness RxJS to add some stream-processing superpowers.
RxJS provides a rich set of operators — like map
, filter
, and take
– that help you transform and filter asynchronous data flows. To use them with your async generator, you can convert the generator into an observable using RxJS’s from
operator.
We’ll now use the take
operator to filter the first five chunks of data:
import { from, take } from \\"rxjs\\";\\n\\n// Consume HTTP stream using RxJS\\nasync () => {\\n from(fetchStream())\\n .pipe(take(5))\\n .subscribe({\\n next: (chunk) => {\\n const decoder = new TextDecoder();\\n console.log(\\"Chunk:\\", decoder.decode(chunk));\\n },\\n complete: () => {\\n console.log(\\"Stream complete\\");\\n },\\n });\\n};\\n
If you are new to RxJS, the from
operator converts the async generator into an observable. This allows us to subscribe and access the fetched data as if it were synchronous. Looking at our log output after decoding, we should be able to see the first five chunks of the stream:
Chunk: ky^p1egh\\nChunk: 1q)zIz43\\nChunk: xm5aJGSX\\nChunk: GSx6a2UQ\\nChunk: GFlwWPu^\\nStream complete\\n
Alternatively, you could consume the stream using a for await...of
loop:
// Consume the HTTP stream using for-await-of\\nfor await (const chunk of fetchStream()) {\\n const decoder = new TextDecoder();\\n console.log(\\"Chunk:\\", decoder.decode(chunk));\\n}\\n
But with this approach, we miss out on RxJS operators, which make it easier to manipulate the stream in more flexible ways. For example, we can’t use the take
operator to limit the number of chunks we want to consume.
However, this limitation won’t last forever. The Iterator Helpers proposal has reached Stage 4 for synchronous iterators, and a companion proposal covers async iterators. Eventually, you’ll be able to limit or transform generator output natively, similar to what RxJS already does for observables.
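In runtimes that already ship the synchronous Iterator Helpers (recent V8-based engines do), generator output can be transformed natively today. A small sketch, with the caveat that the async-iterator equivalent is still a separate proposal:

```ts
function* naturals() {
  let n = 1;
  while (true) yield n++;
}

// Generator objects inherit the new Iterator Helpers, so this chains natively.
const firstFiveSquares = naturals()
  .map((n) => n * n)
  .take(5)
  .toArray();

console.log(firstFiveSquares); // [1, 4, 9, 16, 25]
```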
\\nFor more complex asynchronous flows, RxJS still offers a robust toolkit that won’t be easily replaced by native iteration helpers anytime soon:
\\nAspect | \\nGenerators | \\nRxJS (Observables) | \\n
Programming model | \\nPull-based: Consumer calls next() to retrieve data | \\nPush-based: Data is emitted to subscribers when available | \\n
Built-in vs. Library | \\nNative to JavaScript (no extra dependency) | \\nRequires RxJS library | \\n
Ease of Use | \\nRelatively straightforward for sequential flows, but can be unfamiliar | \\nSteeper learning curve due to extensive API (operators, subscriptions) | \\n
Data flow | \\nYields one value at a time, pausing between yields | \\nCan emit multiple values over time, pushing them to subscribers | \\n
Operators and transformations | \\nMinimal (manual iteration, no built-in transformations) | \\nRich operator ecosystem (map , filter , merge , switchMap , etc.) | \\n
Scalability | \\nCan become cumbersome with multiple streams or complex branching | \\nDesigned for large-scale, reactive architectures, and multiple streams | \\n
Performance considerations | \\nLightweight for simpler tasks (no external overhead) | \\nEfficient for real-time or complex pipelines, but adds library overhead | \\n
When to choose | \\nIf you need fine-grained control, simpler iteration, fewer transformations | \\nIf you need robust data stream handling, built-in ops, or event pipelines | \\n
JavaScript generators offer a powerful and often overlooked way to handle asynchronous operations, manage state, and process data streams. By allowing you to pause and resume execution, they enable a more precise flow of control compared to regular functions — especially when you need to tackle long-running tasks or iterate over large datasets.
\\nWhile generators excel in many scenarios, tools like RxJS provide a powerful ecosystem of operators that can streamline complex, event-driven flows.
\\nBut there’s no need to pick sides: you can combine the elegance of generators with RxJS’s powerful transformations, or even just stick to a simple for await...of
loop if that suits your needs.
Looking ahead, new iteration helpers may bring generator capabilities closer to those of RxJS — but for the foreseeable future, RxJS remains a staple for handling intricate reactive patterns.
Proper system architecture is an often overlooked yet crucial measure of success for most modern software companies. The most successful software products today share a common trait: a well-thought-out division of systematic assets and resources.
\\nMicro-frontends have become an effective way to break down monolithic frontend applications into smaller, manageable parts. This approach makes your applications scalable and empowers your team to address complex challenges so that they can deliver high-quality solutions more consistently.
\\nwebpack Module Federation is a tool that enables independent applications to share code and dependencies. In this article, we’ll dive into how Module Federation works, its importance in micro-frontends, and strategies to tackle common integration challenges effectively.
\\nwebpack Module Federation, introduced in webpack 5, is a feature that allows JavaScript applications to share code and dynamically load modules during runtime.
\\nThis modern approach to sharing dependencies eliminates the need for duplication and provides the flexibility to share libraries and dependencies across different applications without creating redundant code. This way, the apps load only the necessary code at runtime.
\\nModule Federation introduces the concept of a host application, the shell that loads functionality at runtime, and a remote application, which exposes modules (such as components or utilities) for hosts to consume.
\\nHere’s an example host configuration to demonstrate how you can set up a host and a remote using the ModuleFederationPlugin
in webpack:
plugins: [\\n new ModuleFederationPlugin({\\n name: \'host\',\\n remotes: {\\n app1: \'app1@http://localhost:3001/remoteEntry.js\',\\n },\\n shared: { react: { singleton: true }, \'react-dom\': { singleton: true } },\\n }),\\n];\\n\\n
name
: Defines the name of the hostremotes
: Specifies the remote application (in this case, app1
) and the location of its entry point (remoteEntry.js
)shared
: Ensures that shared dependencies like React and React DOM use a single version across both applicationsHere’s an example of a remote configuration:
\\nplugins: [\\n new ModuleFederationPlugin({\\n name: \'app1\',\\n filename: \'remoteEntry.js\',\\n exposes: {\\n \'./Button\': \'./src/Button\',\\n },\\n shared: { react: { singleton: true }, \'react-dom\': { singleton: true } },\\n }),\\n],\\n\\n
name
: Names the remote application (app1
)filename
: Specifies the filename where the remote entry point will be availableexposes
: Indicates which modules (like ./Button
) the host can accessshared
: Same as in the host configuration, ensures consistent dependency versionsConsider a React application setup to better understand how Module Federation works. Imagine two separate React projects:
\\nThe traditional approach would be to extract the carousel into an npm package, refactor the code, and publish it to a private or public npm registry. You would then have to install and update this package in both applications whenever there’s a change, a process that quickly becomes tedious, time-consuming, and prone to versioning issues.
\\nModule Federation eliminates this hassle. With it, the Home App continues to own and host the carousel component. Then the Search App dynamically imports the carousel at runtime.
\\nHere’s how it works:
\\n// webpack.config.js for Home App\\nmodule.exports = {\\n plugins: [\\n new ModuleFederationPlugin({\\n name: \'home\',\\n filename: \'remoteEntry.js\',\\n exposes: {\\n \'./Carousel\': \'./src/components/Carousel\',\\n },\\n shared: [\'react\', \'react-dom\'], // Share dependencies\\n }),\\n ],\\n};\\n// webpack.config.js for Search App\\nmodule.exports = {\\n plugins: [\\n new ModuleFederationPlugin({\\n name: \'search\',\\n remotes: {\\n home: \'home@http://localhost:3000/remoteEntry.js\',\\n },\\n }),\\n ],\\n};\\n// Search App dynamically imports the Carousel component\\nimport React from \'react\';\\nconst Carousel = React.lazy(() => import(\'home/Carousel\'));\\nexport const App = () => (\\n <React.Suspense fallback={<div>Loading...</div>}>\\n <Carousel />\\n </React.Suspense>\\n);\\n\\n
Module Federation and micro-frontends often go hand in hand, but they are not inherently dependent on each other.
\\nA micro-frontend architecture breaks a monolithic frontend into smaller, independent applications, making development more modular and scalable. Developers can implement micro-frontends using tools like iframes, server-side rendering, or Module Federation.
\\nOn the other hand, Module Federation is a powerful tool for sharing code and dependencies between applications. While it complements micro-frontends well, you can also use it independently in monolithic applications.
\\nCan you use Module Federation without micro-frontends?
Absolutely. Module Federation isn’t limited to micro-frontends. For instance, it allows you to share a design system across multiple monolithic applications. It can also dynamically load plugins or features in a single-page application (SPA), eliminating the need to rebuild the entire app for updates.
\\n\\nDo micro-frontends require Module Federation?
No, micro-frontends don’t rely exclusively on Module Federation. You can build them using other methods like server-side includes (SSI), custom JavaScript frameworks, or even static bundling.
\\nHowever, Module Federation simplifies code sharing and dependency management, which is why it’s a preferred tool for many developers.
\\nModule Federation plays a key role in reducing code duplication and makes it easier to update shared modules across applications. This efficiency ensures that your applications remain lightweight, maintainable, and up-to-date.
\\nThere are many benefits to using Module Federation when you want to build applications that can easily scale. However, as with any technology, implementing it comes with unique challenges. Let’s explore some of the key issues you might face and how to address them effectively.
\\nMultiple teams often use the same CSS framework (like Tailwind CSS) in micro-frontend architectures. If two micro-frontends use global class names, like button
or primary-btn
, you might experience style overrides or unexpected results.
For example, when the host application applies a button class with a blue background, and the remote application uses a button class with a red background, integrating these applications can cause their styles to override one another. This leads to inconsistent designs that affect the user experience.
\\nTo avoid style conflicts, use Tailwind CSS’s prefix option to ensure all class names in the remote application are unique. This isolates your styles and prevents them from clashing with the host application.
\\nTo implement this, first add a unique prefix in your tailwind.config.js
file:
module.exports = {\\n prefix: \'remote-\', // Adds \'remote-\' to all classes in the remote app\\n};\\n\\n
With this setup, Tailwind CSS prefixes all class names automatically. For example:
\\nbtn-primary
becomes app1-btn-primary
text-lg
becomes app1-text-lg
Then, update your components to use the class names with the new prefixes:
\\nconst MyButton = () => (\\n <button className=\\"remote-btn-primary remote-text-lg\\">\\n Click Me\\n </button>\\n);\\n\\n
This ensures that in the final application, the remote-btn-primary
from the remote app won’t interfere with the similarly named host-btn-primary
in the host application:
Imagine the host application uses React 18.2.0 while the remote application relies on React 17.0.2. This mismatch can result in duplicate React instances, which will break features like useState
, useEffect
, or shared context.
To fix this issue, use webpack’s Module Federation to enforce a single version of shared dependencies:
\\nmodule.exports = {\\n plugins: [\\n new ModuleFederationPlugin({\\n name: \'remoteApp\',\\n filename: \'remoteEntry.js\',\\n remotes: {},\\n exposes: {},\\n shared: {\\n react: {\\n singleton: true,\\n requiredVersion: \'^18.2.0\',\\n },\\n \'react-dom\': {\\n singleton: true,\\n requiredVersion: \'^18.2.0\',\\n },\\n },\\n }),\\n ],\\n};\\n\\n
This ensures all micro-frontends load only one React version.
\\nIf you’re using a monorepo like Nx or Turborepo, enforce consistent versions in your package.json
:
{\\n \\"resolutions\\": {\\n \\"react\\": \\"18.2.0\\",\\n \\"react-dom\\": \\"18.2.0\\"\\n }\\n}\\n\\n
When micro-frontends share state, it often creates challenges. For example, say you have a host application that manages user authentication, and your remote application controls the shopping cart. Synchronizing user data or passing authentication tokens between the two can quickly spiral into a mess.
\\nTo handle this issue, use a centralized state management tool like Redux or RxJS, or broadcast changes with the browser’s Custom Event API.
\\nFirst, create a shared Redux store for cross-micro-frontend communication:
\\nimport { configureStore } from \'@reduxjs/toolkit\';\\nconst store = configureStore({\\n reducer: {\\n user: userReducer,\\n cart: cartReducer,\\n },\\n});\\nexport default store;\\n\\n
Then, use window.dispatchEvent
and window.addEventListener
to broadcast events:
// Host App: Emit login event\\nwindow.dispatchEvent(new CustomEvent(\'user-login\', { detail: { userId: \'12345\' } }));\\n// Remote App: Listen for login event\\nwindow.addEventListener(\'user-login\', (event) => {\\n console.log(\'User logged in:\', event.detail.userId);\\n});\\n\\n
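If you rely on custom events heavily, it can help to wrap them in a small typed helper so the host and remotes agree on payload shapes. A hedged sketch, where the event map is an assumption:

```ts
// Shared contract for cross-application events.
interface AppEvents {
  "user-login": { userId: string };
  "cart-updated": { itemCount: number };
}

export function emit<K extends keyof AppEvents>(name: K, detail: AppEvents[K]): void {
  window.dispatchEvent(new CustomEvent(name, { detail }));
}

export function on<K extends keyof AppEvents>(
  name: K,
  handler: (detail: AppEvents[K]) => void,
): () => void {
  const listener = (event: Event) =>
    handler((event as CustomEvent<AppEvents[K]>).detail);
  window.addEventListener(name, listener);
  return () => window.removeEventListener(name, listener); // unsubscribe
}
```

Publishing a helper like this from a shared package gives every micro-frontend the same compile-time contract without introducing runtime coupling.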
Routing conflicts happen when different micro-frontends define identical or overlapping routes. For example, if both the host and a remote application independently create a /settings
route, it can cause unpredictable issues. One route might overwrite the other, or users could end up on the wrong page entirely.
To resolve routing conflicts, use lazy loading and distinct route namespaces. To prevent interference, ensure each micro-frontend manages its routes independently.
\\nLazy loading only fetches routes when they’re needed, keeping the routing clean and conflict-free. Here’s how you can implement it:
\\nconst routes = [\\n { path: \'/host\', loadChildren: () => import(\'host/Routes\') },\\n { path: \'/remote\', loadChildren: () => import(\'remote/Routes\') },\\n];\\n\\n
With this setup, navigating to /host
loads only the routes defined in host/Routes
, while /remote
loads routes from remote/Routes
. This ensures each application stays isolated and avoids conflicts.
You can also use namespaces to ensure each micro-frontend has distinct route paths, even for pages that have similar names like /settings
.
Here’s an example of namespacing:
\\n/app1/settings
/app2/settings
\\nconst app1Routes = [\\n { path: \'/app1/settings\', component: SettingsComponent },\\n { path: \'/app1/profile\', component: ProfileComponent },\\n];\\nconst app2Routes = [\\n { path: \'/app2/settings\', component: SettingsComponent },\\n { path: \'/app2/notifications\', component: NotificationsComponent },\\n];\\n
When you use prefixed namespaces (/app1/
and /app2/
), you completely avoid route duplication:
Dynamic imports in micro-frontends can cause errors if modules fail to load. For example, if the host application incorrectly sets the path to a federated module, it can lead to 404 errors. Imagine the host trying to load a shared component from the remote application, only to crash because the module’s URL is either incorrect or unavailable.
\\nSet webpack’s publicPath
correctly to ensure dynamic imports always resolve to the right location.
First, set webpack’s output.publicPath
to auto
so it dynamically determines the correct path for your modules like this:
module.exports = {\\n output: {\\n publicPath: \'auto\', // Automatically resolves paths for dynamic imports\\n },\\n};\\n\\n
Once the publicPath
is set, you can dynamically import a federated module in your React application:
import React from \'react\';\\n// Lazy load the remote component\\nconst MyRemoteComponent = React.lazy(() => import(\'app2/MyComponent\'));\\nconst App = () => (\\n <React.Suspense fallback={<div>Loading...</div>}>\\n <MyRemoteComponent />\\n </React.Suspense>\\n);\\nexport default App;\\n\\n
This way, React.lazy
loads MyComponent
from the remote module (app2
). If the module takes time to load, the fallback (e.g., “Loading…”) ensures the UI remains responsive:
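Note that publicPath only fixes path resolution; the remote can still be unreachable at runtime. A common safeguard, sketched here with an illustrative boundary component, is to wrap the lazy remote in an error boundary so a failed load degrades gracefully instead of crashing the host:

```tsx
import React from "react";

// Illustrative error boundary; any equivalent implementation works.
class RemoteBoundary extends React.Component<
  { fallback: React.ReactNode; children: React.ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };
  static getDerivedStateFromError() {
    return { hasError: true };
  }
  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}

const MyRemoteComponent = React.lazy(() => import("app2/MyComponent"));

export const SafeRemote = () => (
  <RemoteBoundary fallback={<div>Remote module unavailable</div>}>
    <React.Suspense fallback={<div>Loading...</div>}>
      <MyRemoteComponent />
    </React.Suspense>
  </RemoteBoundary>
);
```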
Micro-frontends usually need to access common assets like images, fonts, styles, or utility functions. Each micro-frontend might duplicate these resources without a centralized approach, which can lead to bloated bundle sizes, inconsistent branding, and slower page loads.
\\nFor example, say you have a host application that includes a utility function for formatting dates and a custom font for its UI and a remote application that duplicates the same utility and font files. If you load them together, it can become redundant, waste bandwidth, and hurt performance.
\\nTo streamline the handling of shared resources across your applications, you should centralize them and ensure every micro-frontend has consistent access.
\\nTo achieve this, first place shared assets like fonts, stylesheets, or scripts on a CDN or shared server. This ensures all micro-frontends pull from the same source, reducing duplication and improving load performance.
\\nFor example, say you want to host a global stylesheet and utilities. You can add these to your shared resources:
\\n<link rel=\\"stylesheet\\" href=\\"https://cdn.example.com/styles/global.css\\" /> \\n<script src=\\"https://cdn.example.com/utils.js\\"></script> \\n\\n
By leveraging browser caching, updates to these shared resources will automatically reflect across all micro-frontends, improving your app’s performance.
\\nThen, to avoid duplicating code, extract reusable functions or utilities into a shared library and publish it for all micro-frontends to use.
\\nFor example, say you want to create a date utility function in a shared utils/formatDate.js
library:
export const formatDate = (date) => new Intl.DateTimeFormat(\'en-US\').format(date);\\n\\n
Publish the library to a private npm
registry (e.g., Verdaccio):
npm publish --registry https://registry.example.com/ \\n\\n
Then, install and use it in micro-frontends:
\\nnpm install @myorg/utils \\n\\n
Finally, use it in your code:
\\nimport { formatDate } from \'@myorg/utils\'; \\nconsole.log(formatDate(new Date())); // Output: 12/28/2024 \\n\\n
By hosting shared resources on a CDN and using shared libraries, you can reduce redundancy and ensure consistent behavior across all micro-frontends.
\\nFor resources that aren’t always needed, dynamically load them to optimize performance. For example, you can dynamically import a utility like this:
\\nimport(\'https://cdn.example.com/utils.js\').then((utils) => {\\n const formattedDate = utils.formatDate(new Date());\\n console.log(formattedDate);\\n});\\n\\n
This approach will effectively reduce the initial load time for your application because it will only load what is necessary. It also ensures that your app fetches up-to-date resources when needed:
\\nModule Federation is a game changer for managing dependencies and sharing code in your micro-frontend projects. While its integration can be challenging, the best practices we outlined in this guide should help you navigate the most common among them, including styling conflicts, version mismatches, and routing errors.
\\nHappy coding!
JavaScript has been around for quite some time, first coming out almost 30 years ago. Because of its rich history, it has gathered quite a bit of functionality over the years.
\\nMore recently, in about 2012, TypeScript attempted to give types to JavaScript. Fortunately, because developers are a relatively unopinionated bunch (and easily agree on everything) there hasn’t ever been much furor on the JavaScript vs. TypeScript debate. Not.
\\nWhether you’re part of the typed club or not, one feature within TypeScript that can make life a lot easier is object destructuring. TypeScript object destructuring is a bit of a weird one; sometimes you can hear the name of a feature and assume a certain functionality.
\\nFor example, you know what a promise is, or an observable. But destructuring? It doesn’t lend itself to a credible idea of what it actually is.
\\nI went literally years without ever using object destructuring, so if you’re not immediately aware of what it is or why you’d want to use it, that’s okay.
\\nHowever, now that I know about it, it makes my life dramatically easier when dealing with operators that take an input and produce other variables. For example, an observable may emit many values that go through a pipe
, and with each operator, it could get harder to work out exactly what variable is where in the subsequent responses.
That specific use case we’ll get into a little bit later. For now, let’s start slow and use simple examples to understand how they work.
\\nConsider this humble array:
\\nlet simpleArray = [1,2,3,4,5];\\n\\n
If we wanted to get the first two values of this array, typically we could use slice
or some other operator. But with destructuring, we can do this instead:
const [a,b] = simpleArray\\n\\n
It feels weird assigning bits of an array to our const
, but we’re actually just popping out the first two values of that array and assigning them to a
and b
. If we wanted to access the remaining values, we could even do something like this:
const [a, b, ...remaining] = simpleArray\\n\\n
That’s a nice way of seeing how destructuring works, but it also seems a bit pointless. If you saw that in a vacuum, you’d likely think it was like slice
but with more steps. You’d be forgiven for thinking that.
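For comparison, here is roughly what the index-and-slice equivalent looks like (a quick sketch, nothing more):

const simpleArray = [1, 2, 3, 4, 5];

// The same result without destructuring:
const a = simpleArray[0];
const b = simpleArray[1];
const remaining = simpleArray.slice(2);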
As developers, a lot of our complexity can come from things happening at some point in the future. Of course, there is the humble Promise
, an asynchronous object that resolves at some point in the future. But before long, we’ve graduated to Observables
and a temporal element is introduced. These things emit over time, baby.
Would we encounter these every day as a developer? Well, if you have a form on your website where people can enter data, and they can view results, what are our sources of events? There’s the user clicking on the search button; that’s true. But what about sorting and filtering the data? It would be nice to do this all out of the one Observable
and not stash our half-sorted array into a temporary variable while we’re working it out.
That’s such a good application for destructuring. Let’s see why.
\\n\\nLet’s start out by subscribing to a sortOrder
variable. This is the name of the header that the data would be sorted by. In our handy-dandy editor, it tells us the type of data that will come from this Observable
, as suggested by the type system:
Unsurprisingly, it’s an Observable<string>
. It’s going to emit over time when someone clicks on the Sort Order button. But, we want to combine this observable with other observables. What happens when we do that?:
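To make that concrete, here is a minimal standalone sketch; the BehaviorSubjects stand in for the real streams, and RxJS 7's combineLatestWith is assumed:

import { BehaviorSubject } from 'rxjs';
import { combineLatestWith } from 'rxjs/operators';

const sortOrder = new BehaviorSubject<string>('asc');
const header = new BehaviorSubject<string>('name');

// Hovering over combined$ now reports Observable<[string, string]>
const combined$ = sortOrder.pipe(combineLatestWith(header));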
Ah, that’s not very… descriptive. Now it’s an Observable of type [string, string]. TypeScript pulls this tuple type out of thin air as it describes our data shape.
Annoyingly, we’d have to access the values in this by using an index, so we’d have to remember when we assigned each variable in the pipe
operator. For reasonably complex pipe()
chains, before long we’d look like this:
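In practice, that means the subscriber plucks values out by position, something like this sketch (continuing the combined$ stream from above):

combined$.subscribe((x) => {
  const sort = x[0];   // was index 0 the sort order, or the header?
  const header = x[1]; // you have to remember the order of the pipe
});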
Where this gets really exciting is that, with each subsequent call to combineLatestWith
, we receive our original tuple, with the newest combineLatestWith
value tacked on the end.
Rapidly this descends into chaos as our type information gets truncated due to the sheer length:
\\nHovering over the object to see the type shows us what we’re dealing with:
\\nThis does not spark joy.
\\nIf we had some sort of even more complex system, and we had to listen to many Observables
to produce a result, then it wouldn’t be so hard to imagine a deeply, deeply nested set of tuples and objects. And we’d have to pluck them out by index.
So basically, just surround the whole code block with “NEVER REFACTOR THIS YOU WILL BE FIRED” and move on with your life.
\\nFortunately, destructuring can really help with this proposition.
\\nLet’s dummy up a simple HTML table with a few fields, and sorting:
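A minimal sketch of such a component follows; it assumes Angular's reactive forms, and the names formGroup, searchButton$, and sortOrder are chosen to mirror the streams used later:

import { Component, inject } from '@angular/core';
import { FormBuilder, ReactiveFormsModule } from '@angular/forms';
import { Subject } from 'rxjs';

@Component({
  selector: 'app-complex-table',
  standalone: true,
  imports: [ReactiveFormsModule],
  template: `
    <form [formGroup]="formGroup">
      <input formControlName="name" placeholder="Name" />
      <input formControlName="profession" placeholder="Profession" />
    </form>
    <button (click)="searchButton$.next()">Search</button>
    <button (click)="sortOrder.next('asc')">Sort</button>
  `,
})
export class ComplexTableComponent {
  private fb = inject(FormBuilder);
  formGroup = this.fb.group({ name: [''], profession: [''] });
  searchButton$ = new Subject<void>();
  sortOrder = new Subject<string>();
}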
\\nWe use forms that are at least this complex in the day-to-day, but there’s quite a bit going on here. We want to react instantly when someone presses the Search button, and the Sort button… but we also want to take the latest values from other fields, like the name or profession input boxes.
\\nFortunately, we can use RxJS operators to achieve just this:
\\nthis.tableData$ = this.sortOrder.pipe(\\n combineLatestWith(this.header),\\n combineLatestWith(this.searchButton$),\\n withLatestFrom(this.formGroup.valueChanges)\\n) \\n\\n
But after, we want to use a switch map to terminate any in-flight requests if new requests come through. What’s the signature of the object that is passed into the switchMap
operator though?:
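Written out by hand, it's a tuple nested one level per operator. In this sketch, Header and SortOrder are the types from the data-source signature below, while FormValues is an assumed alias for the form's value type:

// What the editor infers for the value entering switchMap:
type SwitchMapInput = [
  [
    [SortOrder, Header | undefined], // sortOrder, combined with this.header
    void,                            // ...combined with this.searchButton$
  ],
  FormValues,                        // ...plus the latest this.formGroup.valueChanges
];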
For every new RxJS operator we use, it gets wrapped in a tuple. So, imagine that we have a data source (like an HTTP API for example) that has a signature like this:
\\nfakeAsyncronousDataSource(name: string, profession: string, header: Header | undefined, sortOrder: SortOrder, page: number)\\n\\n
Our entire chain winds up looking like this:
\\nthis.tableData$ = this.sortOrder.pipe(\\n combineLatestWith(this.header),\\n combineLatestWith(this.searchButton$),\\n withLatestFrom(this.formGroup.valueChanges),\\n switchMap(x => {\\n return this.fakeAsyncronousDataSource(x[1].name ?? \'\', x[1].profession ?? \'\', x[0][0][1], x[0][0][0], 0)\\n }),\\n startWith(testData.slice(0, 10))\\n)\\n
Imagine trying to present that straight-faced at a code review.
\\n“Oh, and this is the bit where I pull values out of a tuple nested three layers deep and pass it to a function. I do it entirely using index values that I, and only I, understood when I wrote the code.”
\\nAnd then the laughter gives way to long stares as they realize — you’re actually being serious. No no, let’s not be that person.
\\nFortunately, we can “unwrap” the layered tuple by using TypeScript object destructuring. It’s kind of a quick transition, but our RxJS chain now looks like this:
\\nthis.tableData$ = this.sortOrder.pipe(\\n combineLatestWith(this.header),\\n combineLatestWith(this.searchButton$),\\n withLatestFrom(this.formGroup.valueChanges),\\n switchMap(([[[sort, header], _], formData]) => {\\n return this.fakeAsyncronousDataSource(formData.name ?? \'\', formData.profession ?? \'\', header, sort, 0)\\n }),\\n startWith(testData.slice(0, 10))\\n)\\n\\n
This has many benefits.
\\nFirst, and most obviously, we know what the variables are when they are used in the function call. We don’t have to pluck values out by index, so readability improves substantially.
\\nSecondly, we can mentally associate each tuple value in the destructuring to the respective pipe operators before it. So, if we want to add things to the pipe, it’s not a huge inconvenience.
\\nTo be fair, normally it takes quite a long tour-de-force through a language feature to really tease out its benefits. But, in this case, it’s fairly obvious. When you have a complex type that is being produced by something like an observable chain, don’t be afraid to reach for something like destructuring.
\\nIn this article, we learned how to destructure objects in a wide variety of cases. We saw how they could be used in arrays, but also how they could be used in an Observable
chain. However we use them, they help us to write clean, maintainable code.
Feel free to clone the sample on GitHub here. After you have done so, and have started the project, you can access the simple example at http://localhost:4200/simple and the complex example at http://localhost:4200/complex.
\\n Firebase is one of the most popular authentication providers available today. Meanwhile, .NET stands out as a good choice for API development.
\\nCombining these two technologies can result in a secure and high-performing web application. But to achieve this, we want complete implementation. That includes securing our APIs, enabling interactive login functionality, and being able to automatically retrieve access and refresh tokens whenever needed.
\\nWhile many articles address parts of this process, few offer an end-to-end solution, which is what this guide aims to provide. We’ll explore how to integrate Firebase with an ASP.NET application, covering all the essential steps.
\\nHere’s what we need for this project to be a success:
\\nWhen I began this project, my initial goal was to create authentication that relied solely on people logging in from their devices with genuine accounts. Most of the articles I came across focused on username/password logins, which initially seemed superfluous to me. However, here’s the thing — you need to use username/password logins in the early days of testing your app because there’s no other practical way to test logins. We’ll explore this in more detail later.
\\nConfiguring a system like this involves several moving parts. I’ll do my best to cover them in chronological order. This approach may require jumping back and forth between Firebase and the app configuration, but it ensures that we only implement components as they’re needed.
\\nFirst things first, we need to set up a Firebase application. Head over to the Firebase console to create the new application. In my case, I’ll use the very uninspired project name “authtest.”
\\nIn the list, find the Authentication tile:
\\nThen, set up a sign-in method:
\\nIn this case, we’ll configure email/password and Google authentication. Unsurprisingly, Google authentication is one of the easiest authentication methods to set up, so it makes sense to use it out of the gate. However, you may use other authentication providers as you see fit.
\\nBefore diving into the code, it’s helpful to understand what we’re aiming to achieve with this implementation. Firebase authentication, identity management, and OAuth2 are complex topics in and of themselves. Unfortunately, this complexity lends itself to authentication failures or unexpected behavior.
\\nFirebase provides an authentication service so people can prove they are who they say they are. However, it’s important to note that this service does not include data storage or persistence for authentication actions. Firebase can store data, such as in the real-time database that it offers, but to me, it’s not a great choice for this kind of operation.
\\nASP.NET ships with its own identity offering. Typically, this is configured in an application to allow people to create new accounts and set passwords on the app they are currently using. In our case, we want our users to have a user account on our website (so we can receive their email, name, and other details), but we don’t want to carry out the authentication for them. Instead, we want Firebase to authenticate them, and then create an account for them on our website, essentially linking the two together.
\\nIn summary, Firebase will prove that our user owns the account they are logging in to, and then our app will make the appropriate decisions as to whether they can access parts of the app.
\\nCreate a standard Web API project without any authentication or any other bits. We’ll have to add a lot of this manually.
\\n\\nThen we’ll install the following NuGet packages, which provide the functionality required for Firebase authentication to work:
\\ndotnet add package FirebaseAdmin\\ndotnet add package Microsoft.AspNetCore.Authentication.Cookies\\ndotnet add package Microsoft.AspNetCore.Authentication.JwtBearer\\ndotnet add package Microsoft.AspNetCore.Authentication.OAuth\\ndotnet add package Microsoft.AspNetCore.Identity.EntityFrameworkCore\\ndotnet add package Microsoft.AspNetCore.Identity.UI\\ndotnet add package Microsoft.AspNetCore.Identity\\ndotnet add package Microsoft.EntityFrameworkCore.Design\\ndotnet add package Swashbuckle.AspNetCore\\ndotnet add package System.IdentityModel.Tokens.Jwt\\n\\n
As mentioned at the outset, we want each user in our program to receive an IdentityUser
entry in our app’s database. To achieve this, we can tell the identity system to create a modified entity that contains our relevant Firebase details.
Within Models/Authentication/ApplicationUser.cs
, add the following:
public class ApplicationUser : IdentityUser\\n{\\n public string FirebaseUserId { get; set; }\\n public string Name { get; set; }\\n}\\n\\n
Program.cs
While I won’t paste the entire Program.cs
here (it’s hundreds of lines long!), I’m going to cover the most relevant parts, and you can check out the GitHub repository if you want to see the whole thing.
To begin, we’d like to configure the database for our application. For ease of use, I’m going to use SQLite, but you can use whichever provider suits you:
\\nbuilder.Services.AddDbContext<ApplicationDbContext>(options => options.UseSqlite(\\"Data Source=testauth.db\\"));\\n\\nbuilder.Services.AddDefaultIdentity<ApplicationUser>().AddEntityFrameworkStores<ApplicationDbContext>();\\n\\n
As you’re probably already aware, authentication exchanges usually involve the client receiving a JWT from the server. JWTs aren’t very exciting; they’re just an agreed-upon JSON standard that contains data relating to the user’s authentication and identity.
\\nJWTs can be issued to a user and then trusted as authentic because they are signed. If the contents don’t match the signature, the JWT has been tampered with, and the server should reject it.
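To see how ordinary the format is, here is a hedged sketch of peeking at a token's payload in the browser console; idToken is a placeholder for a real token string, and this is inspection only, since trusting the claims still requires signature verification:

// A JWT is three base64url-encoded segments: header.payload.signature
const [, payloadB64] = idToken.split('.');
const json = atob(payloadB64.replace(/-/g, '+').replace(/_/g, '/'));
console.log(JSON.parse(json)); // e.g. { iss, aud, exp, user_id, email, ... }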
\\nSo, on application startup, we need to fetch Google’s signing keys and trust them when the JWT check is taking place:
\\nvar client = new HttpClient();\\nvar keys = client\\n .GetStringAsync(\\n \\"https://www.googleapis.com/robot/v1/metadata/x509/[email protected]\\")\\n .Result;\\nvar originalKeys = new JsonWebKeySet(keys).GetSigningKeys();\\nvar additionalkeys = client\\n .GetStringAsync(\\n \\"https://www.googleapis.com/service_accounts/v1/jwk/[email protected]\\")\\n .Result;\\nvar morekeys = new JsonWebKeySet(additionalkeys).GetSigningKeys();\\nvar totalkeys = originalKeys.Concat(morekeys);\\n\\n
There are two separate key sources that Google uses for its tokens. In my case, I retrieved the keys from both sources and stuck them together so that when the JWT token is validated, it passes certificate validation.
\\nNext up, we need to configure our application to receive JWTs from Google and trust them as authentic. This occurs within our AddAuthentication
call.
We’ll need our Project ID for this one. To retrieve this, hop back into our Firebase console, and click on the cog next to Project Overview:
\\nOur Project ID should be listed here:
\\nWhile you’re there, copy down the Web API key because we will need it later.
\\nN.B., if you hover over the question mark next to Project ID, it will say that the ID is a convenience alias, or downplay the importance of it. This is a bit of a documentation bug because, without this project ID, nothing would work in your project.
\\nAlso, while we’re here, set up a new user for your app. It’s okay to use a nonexistent email; this is just to test username/password authentication:
\\nWithin our Program.cs
, let’s configure our Project ID and accept tokens from Firebase:
var projectId = \\"{YOUR-PROJECT-ID-HERE}\\";\\n\\nbuilder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)\\n .AddJwtBearer(options =>\\n {\\n options.IncludeErrorDetails = true;\\n options.Authority = $\\"https://securetoken.google.com/{projectId}\\";\\n options.TokenValidationParameters = new TokenValidationParameters\\n {\\n ValidateIssuer = true,\\n ValidIssuer = $\\"https://securetoken.google.com/{projectId}\\",\\n ValidateAudience = true,\\n ValidAudience = projectId,\\n ValidateLifetime = true,\\n ValidateIssuerSigningKey = true,\\n IssuerSigningKeys = totalkeys\\n };\\n\\n
Now our authentication system will be able to validate JWTs that are issued by Firebase. But, when a user is validated, we also want to reference a built-in user account within our app. That way, users can store details or preferences in the app, and this will be correlated to a user identity in our app.
\\nEach time a JWT is validated, we want to check if an entity exists in our database that correlates to the user who has authenticated. Bear in mind that if your app scales and becomes gigantic, incurring a database hit every time a token is validated will likely introduce some performance concerns. But for our small sample app, it’s an acceptable trade-off.
\\nAbove, where we referenced our options
object, we want to do so again, to register against the OnTokenValidated
event:
options.Events = new JwtBearerEvents\\n{\\n OnTokenValidated = async context =>\\n {\\n // Receive the JWT token that firebase has provided\\n var firebaseToken = context.SecurityToken as Microsoft.IdentityModel.JsonWebTokens.JsonWebToken;\\n // Get the Firebase UID of this user\\n var firebaseUid = firebaseToken?.Claims.FirstOrDefault(c => c.Type == \\"user_id\\")?.Value;\\n if (!string.IsNullOrEmpty(firebaseUid))\\n {\\n // Use the Firebase UID to find or create the user in your Identity system\\n var userManager = context.HttpContext.RequestServices\\n .GetRequiredService<UserManager<ApplicationUser>>();\\n\\n var user = await userManager.FindByNameAsync(firebaseUid);\\n\\n if (user == null)\\n {\\n // Create a new ApplicationUser in your database if the user doesn\'t exist\\n user = new ApplicationUser\\n {\\n UserName = firebaseUid,\\n Email = firebaseToken.Claims.FirstOrDefault(c => c.Type == \\"email\\")?.Value,\\n FirebaseUserId = firebaseUid,\\n Name = firebaseToken.Claims.FirstOrDefault(c => c.Type == \\"name\\")?.Value ??\\n $\\"Planner {firebaseUid}\\",\\n };\\n await userManager.CreateAsync(user);\\n }\\n }\\n }\\n};\\n\\n
Because this occurs each time the JWT is validated, we can be sure that the users’ accounts will be there after they have logged in.
\\nWe can set up authentication to our heart’s desire, but without an easy way to test it, what are we here for? You’re not going to manually craft login requests in cURL to see if the login function works, are you?
\\nFortunately, Swagger provides a configurable way to set up authentication schemes for our apps. We can tell Swagger to send our details to a specific endpoint and receive a token for use in authentication:
\\nbuilder.Services.AddSwaggerGen(options =>\\n{\\n options.AddSecurityDefinition(JwtBearerDefaults.AuthenticationScheme, new OpenApiSecurityScheme\\n {\\n Type = SecuritySchemeType.OAuth2,\\n\\n Flows = new OpenApiOAuthFlows\\n {\\n Password = new OpenApiOAuthFlow\\n {\\n TokenUrl = new Uri(\\"/v1/auth\\", UriKind.Relative),\\n Extensions = new Dictionary<string, IOpenApiExtension>\\n {\\n { \\"returnSecureToken\\", new OpenApiBoolean(true) },\\n },\\n }\\n }\\n });\\n\\n options.AddSecurityRequirement(new OpenApiSecurityRequirement\\n {\\n {\\n new OpenApiSecurityScheme\\n {\\n Reference = new OpenApiReference\\n {\\n Type = ReferenceType.SecurityScheme,\\n Id = JwtBearerDefaults.AuthenticationScheme\\n },\\n Scheme = \\"oauth2\\",\\n Name = JwtBearerDefaults.AuthenticationScheme,\\n In = ParameterLocation.Header,\\n },\\n new List<string> { \\"openid\\", \\"email\\", \\"profile\\" }\\n }\\n });\\n});\\n\\n
Note that we reference a /v1/auth
controller action here. This is the controller action that sends our credentials to Firebase and brings back the appropriate token for our user.
Swagger will submit details to this controller action, which will send our users’ details to Firebase, which will return the user’s token. We’ll need to use our Web API Key, which we noted earlier.
\\nEssentially, we are exchanging the username and password for a JWT token:
\\n[ApiController]\\n[Route(\\"v1/[controller]\\")]\\npublic class AuthController : Controller\\n{\\n [HttpPost]\\n public async Task<ActionResult> GetToken([FromForm] LoginInfo loginInfo)\\n {\\n string uri =\\n \\"https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyPassword?key={WEB-API-KEY\\"};\\n using (HttpClient client = new HttpClient())\\n {\\n FireBaseLoginInfo fireBaseLoginInfo = new FireBaseLoginInfo\\n {\\n Email = loginInfo.Username,\\n Password = loginInfo.Password\\n };\\n var result = await client.PostAsJsonAsync(uri, fireBaseLoginInfo,\\n new JsonSerializerOptions()\\n { WriteIndented = true, PropertyNamingPolicy = JsonNamingPolicy.CamelCase });\\n\\n var encoded = await result.Content.ReadFromJsonAsync<GoogleToken>();\\n Token token = new Token\\n {\\n token_type = \\"Bearer\\",\\n access_token = encoded.idToken,\\n id_token = encoded.idToken,\\n expires_in = int.Parse(encoded.expiresIn),\\n refresh_token = encoded.refreshToken\\n };\\n return Ok(token);\\n }\\n }\\n}\\n\\n
We should also add a UserController
to test our authentication:
[ApiController]\\n[Authorize]\\n[Route(\\"[controller]/[action]\\")]\\npublic class UserController\\n{\\n private readonly IHttpContextAccessor _httpContextAccessor;\\n private readonly UserManager<ApplicationUser> _userManager;\\n\\n public UserController(IHttpContextAccessor httpContextAccessor, UserManager<ApplicationUser> userManager)\\n {\\n _httpContextAccessor = httpContextAccessor;\\n _userManager = userManager;\\n }\\n\\n [HttpGet]\\n public async Task<LoginDetail> GetAuthenticatedUserDetail()\\n {\\n var claimsPrincipal = _httpContextAccessor.HttpContext.User;\\n var firebaseId = claimsPrincipal.Claims.First(x => x.Type == \\"user_id\\").Value;\\n var email = claimsPrincipal.Claims.First(x => x.Type == ClaimTypes.Email).Value;\\n\\n return new()\\n {\\n FirebaseId = firebaseId,\\n Email = claimsPrincipal.Claims.First(x => x.Type == ClaimTypes.Email).Value,\\n AspNetIdentityId = _userManager.FindByEmailAsync(email).Result.Id,\\n RespondedAt = DateTime.Now\\n };\\n }\\n}\\n\\n
Because we’ve configured our identity context, now would be a good time to create our first migration and set up our database so it’s available when the new user account is added.
\\nWithin the context of our project, run the following commands:
\\ndotnet ef migrations add initial\\ndotnet ef database update\\n\\n
After this, our testauth.db
should be created in the application root.
Now, it’s time to fire up our app! Go ahead and start the application. A browser should open and you should see the Swagger UI. Click Authorize:
\\nThen, log in with the details you have set up:
\\nAfterward, you should be told that you are now authorized with OAuth2 🎉
\\nNow, if you execute the “GetAuthenticatedUserDetail” method, your authenticated API action should run successfully:
\\nIf you open the SQLite database that our app has created, you should be able to see the created user in the Users table:
\\nUp to this point, we’ve been running our project in the Swagger documentation. Now it’s time to actually use it in a real app. In order to see how this would work in a client app, let’s set up an Angular app that uses this authentication. You can use any web frontend tool of your choice.
\\nStart by setting up a new Angular app and installing the firebaseui-angular package. Keep in mind that this package requires some configuration, so be sure to follow the setup instructions carefully.
\\nIn the main.ts
file where we register our app with Firebase, we’ll configure the package to connect to Firebase as follows:
import {ApplicationConfig, importProvidersFrom, provideZoneChangeDetection} from \'@angular/core\';\\nimport {provideRouter} from \'@angular/router\';\\n\\nimport {routes} from \'./app.routes\';\\n// import firebaseui from \\"firebaseui\\";\\nimport {BrowserModule} from \\"@angular/platform-browser\\";\\nimport {FormsModule} from \\"@angular/forms\\";\\nimport {AngularFireAuthModule} from \\"@angular/fire/compat/auth\\";\\nimport {AngularFireModule} from \\"@angular/fire/compat\\";\\nimport {firebase, firebaseui, FirebaseUIModule} from \'firebaseui-angular\';\\nimport {provideHttpClient} from \\"@angular/common/http\\";\\n\\nconst firebaseConfig = {\\n apiKey: \\"{{YOUR-API-KEY}}\\",\\n authDomain: \\"{{YOUR-APP-ID}}.firebaseapp.com\\",\\n projectId: \\"{{YOUR-APP-ID}}\\",\\n storageBucket: \\"{{YOUR-APP-ID}}.firebasestorage.app\\",\\n messagingSenderId: \\"***\\",\\n appId: \\"***\\"\\n};\\n\\nconst firebaseUiAuthConfig: firebaseui.auth.Config = {\\n signInFlow: \'popup\',\\n signInOptions: [\\n firebase.auth.GoogleAuthProvider.PROVIDER_ID\\n ],\\n tosUrl: \'www.tos.com\',\\n privacyPolicyUrl: \'www.privacy.com\',\\n credentialHelper: firebaseui.auth.CredentialHelper.GOOGLE_YOLO\\n};\\n\\nexport const appConfig: ApplicationConfig = {\\n providers: [\\n provideZoneChangeDetection({eventCoalescing: true}),\\n provideRouter(routes),\\n provideHttpClient(),\\n\\n importProvidersFrom(BrowserModule,\\n FormsModule,\\n FirebaseUIModule.forRoot(firebaseUiAuthConfig),\\n AngularFireModule.initializeApp(firebaseConfig),),\\n AngularFireAuthModule,\\n ],\\n};\\n\\n
Now it’s time to set up our login screen, authenticate, and use our protected resource.
\\nWe’re going to prepare a very basic login screen for this app. Fortunately, this is easy to achieve using the firebaseui package. All we have to do is add a <firebase-ui>
element on our page and our login options should appear:
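As a minimal sketch, assuming the FirebaseUIModule configuration from main.ts is in place, the markup is a single element:

<!-- Renders the prebuilt FirebaseUI login options -->
<firebase-ui></firebase-ui>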
Within our component initialization, we want to listen to the authentication state on the app. If a user has logged in, we want to store their ID token for use with our API:
\\nngOnInit(): void {\\n this.user = this.firebase.authState;\\n this.firebase.authState.subscribe(async x => {\\n this.token = await x?.getIdToken()\\n });\\n}\\n\\n
We also have a function that uses the protected resource, which stashes our token into the header and makes the call to the API. In response, it sets that data to the variable in the component:
\\nasync useProtectedResource() {\\n const headers = new HttpHeaders({\\n Authorization: `Bearer ${this.token}`\\n });\\n const response = await firstValueFrom(this.http.get<string>(this.apiUrl, {headers}));\\n this.protectedResponseData = response;\\n}\\n\\n
Finally, we can update our app.component.html
with the details we need to make this work. Basically, we’re just showing the value of these variables so we can see that the login has worked:
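Here is a hedged sketch of that template; the names token, protectedResponseData, and useProtectedResource() match the component code above:

<firebase-ui></firebase-ui>
<p>ID token: {{ token }}</p>
<button (click)="useProtectedResource()">Make authenticated request</button>
<p>{{ protectedResponseData }}</p>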
Now, if we click on Make authenticated request, our app will use the supplied bearer token to authenticate against our API:
\\nIf we click the button again, the time will update. And that’s it, now our app uses our token to authenticate against the API!
\\nFor the sample code, check out the GitHub repository. And don’t forget to update the API key details in all of the relevant areas before using it!
\\nThis article provided an end-to-end guide on integrating Firebase authentication with an ASP.NET 8.0 application. We demonstrated how to secure APIs, configure JWT authentication, and set up the client side in a demo Angular frontend.
\\nBy following these steps, you should be able to create a seamless authentication system that enables secure and interactive API usage.
\\nEditor’s note: This post was updated by Carlos Mucuho in January 2025 to restructure and simplify the tutorial and focus solely on using React-Bootstrap.
\\nIn this tutorial, we’ll walk through the process of adding React-Bootstrap to your React application, explore how to use it to create a simple component, and then build a fully responsive layout.
\\nReact is the most-used JavaScript framework for building web applications, and Bootstrap is the most popular CSS framework, powering millions of websites on the internet.
\\nReact-Bootstrap is a complete rebuild of Bootstrap components using React, eliminating the need for Bootstrap’s JavaScript and jQuery dependencies. Instead of manipulating the DOM directly, you work with React components that maintain Bootstrap’s functionality and styling.
\\nIf you’re just getting started with these frameworks, I’d suggest skimming through the official React and React-Bootstrap documentation.
\\nThe most straightforward way to add React-Bootstrap to your React app is by installing it as a dependency:
\\nnpm install react-bootstrap bootstrap\\n\\n
Note that React-Bootstrap doesn’t bundle Bootstrap itself. The package only provides Bootstrap’s components rebuilt in React, which is why we also install Bootstrap as a dependency for its CSS.
\\nOnce the installation is complete, we need to include Bootstrap’s CSS file in our app’s entry file:
\\n// Import Bootstrap CSS\\nimport \'bootstrap/dist/css/bootstrap.min.css\';\\n\\n
In the case of a project built with Vite, that would be in the src/main.jsx
file.
React-Bootstrap components can be imported individually from the package. For example, importing the Button
component would look like this:
// ✅ Best approach - import individual components\\nimport Button from \'react-bootstrap/Button\';\\n\\n// ❌ Not recommended - importing from the main entry point\\nimport { Button } from \'react-bootstrap\';\\n\\n
The first approach is recommended over the second because when you import components individually, your bundler only includes the specific components you’re actually using in your final application bundle. This means less code gets sent to your users’ browsers, resulting in faster load times and better performance, especially as your project grows and you add more React-Bootstrap components.
\\nBootstrap can be used directly on React-Bootstrap components in your React app by applying the built-in classes. To demonstrate the use of Bootstrap classes and React-Bootstrap components, let’s create a basic theme switcher React component:
\\nAs shown in this demo, we’re using a React-Bootstrap dropdown component to implement our theme switcher. We are also using Bootstrap classes to set the size and color of the dropdown button.
\\nNow, let’s write the code for our theme switcher component.
\\nEnsure you have a React app already set up. In your src
folder, create a new file called ThemeSwitcher.jsx
for the component and add the following code snippet to it:
import { useState } from \\"react\\";\\nimport Dropdown from \'react-bootstrap/Dropdown\';\\nimport ButtonGroup from \'react-bootstrap/ButtonGroup\';\\nimport Button from \'react-bootstrap/Button\';\\n\\nconst ThemeSwitcher = () => {\\n const [theme, setTheme] = useState(null);\\n const resetTheme = () => {\\n setTheme(null);\\n };\\n\\n return (\\n <div>\\n <div\\n className={`text-capitalize h1 mb-4 w-100 text-center text-${theme}`}\\n >\\n {`${theme || \\"Default\\"} Theme`}\\n </div>\\n <Dropdown as={ButtonGroup} size=\\"lg\\">\\n <Button\\n className=\\"text-capitalize\\"\\n variant={theme ? theme : \\"secondary\\"}\\n >\\n {theme ? theme : \\"Default\\"}\\n </Button>\\n <Dropdown.Toggle\\n split\\n variant={theme ? theme : \\"secondary\\"}\\n />\\n <Dropdown.Menu>\\n <Dropdown.Item onClick={() => setTheme(\\"primary\\")}>\\n Primary\\n </Dropdown.Item>\\n <Dropdown.Item onClick={() => setTheme(\\"danger\\")}>\\n Danger\\n </Dropdown.Item>\\n <Dropdown.Item onClick={() => setTheme(\\"success\\")}>\\n Success\\n </Dropdown.Item>\\n <Dropdown.Divider />\\n <Dropdown.Item onClick={resetTheme}>\\n Default Theme\\n </Dropdown.Item>\\n </Dropdown.Menu>\\n </Dropdown>\\n </div>\\n );\\n}\\nexport default ThemeSwitcher;\\n\\n
In the code above, we created a very simple theme switcher component using React-Bootstrap’s dropdown component and a few built-in classes.
\\nUsing React’s useState
Hook, we created a state theme
and set its initial value to null
. We also defined the setTheme
method to modify this state. Then we created a resetTheme
function that resets the theme’s value to null
.
Next, in our component markup, we rendered a React-Bootstrap dropdown with four dropdown items. The first three items allow us to switch between different themes: primary
, danger
, and success
. The last dropdown item allows us to reset the theme value to null
using the resetTheme()
function.
Finally, replace the boilerplate code in the App.jsx
file with the following to display the ThemeSwitcher
component:
import \'./App.css\'\\nimport ThemeSwitcher from \'./ThemeSwitcher\'\\nfunction App() {\\n return (\\n <>\\n <ThemeSwitcher />\\n </>\\n )\\n}\\nexport default App\\n\\n
In this example, we see how easy it is to use React-Bootstrap’s components with Bootstrap classes in our React app.
\\nNow that we have our basic theme switcher, let’s try to use as many Bootstrap classes and React-Bootstrap components as possible to add more details to our app.
\\nLet’s start by creating a new app with Vite:
\\nnpm create vite@latest detailed-app -- --template react\\n\\n
Next, install the dependencies as follows:
\\nnpm install axios react-bootstrap bootstrap\\n\\n
Notice that we installed Axios as a dependency. Axios is a promise-based HTTP client for the browser and Node.js. It will enable us to fetch posts from the Bacon Ipsum JSON API.
\\nLet’s make a little modification to the src/main.jsx
file to include the Bootstrap minified CSS file. It should look like the following snippet:
import { StrictMode } from \'react\'\\nimport { createRoot } from \'react-dom/client\'\\nimport \'./index.css\'\\nimport App from \'./App.jsx\'\\n\\n// Add Bootstrap minified CSS\\nimport \'bootstrap/dist/css/bootstrap.min.css\';\\n\\ncreateRoot(document.getElementById(\'root\')).render(\\n <StrictMode>\\n <App />\\n </StrictMode>,\\n)\\n\\n
Next, we’ll create a new directory named components
inside the src
directory of our project. In this new components
directory, create a file called Header.jsx
and update it with the following contents:
import logo from \'../assets/react.svg\'\\nimport Container from \'react-bootstrap/Container\';\\nimport Row from \'react-bootstrap/Row\';\\nimport Col from \'react-bootstrap/Col\';\\nimport Form from \'react-bootstrap/Form\';\\nimport Button from \'react-bootstrap/Button\';\\nimport Navbar from \'react-bootstrap/Navbar\';\\nimport Nav from \'react-bootstrap/Nav\';\\nimport NavDropdown from \'react-bootstrap/NavDropdown\'\\nconst AVATAR = \'https://www.gravatar.com/avatar/429e504af19fc3e1cfa5c4326ef3394c?s=240&d=mm&r=pg\';\\n\\nconst Header = () => (\\n <Navbar collapseOnSelect fixed=\\"top\\" bg=\\"light\\" expand=\\"lg\\">\\n <Container >\\n <Navbar.Brand href=\\"#home\\">\\n <img\\n src={AVATAR}\\n width=\\"36\\"\\n className=\\"img-fluid rounded-circle\\"\\n alt=\\"Avatar Bootstrap logo\\"\\n />\\n </Navbar.Brand>\\n <Navbar.Toggle aria-controls=\\"basic-navbar-nav\\" />\\n <Navbar.Collapse id=\\"basic-navbar-nav\\">\\n <Nav className=\\"me-auto\\">\\n <Nav.Link href=\\"#home\\">Home</Nav.Link>\\n <Nav.Link href=\\"#home\\">Events</Nav.Link>\\n <NavDropdown title=\\"Learn\\" id=\\"basic-nav-dropdown\\">\\n <NavDropdown.Item className=\'font-weight-bold text-uppercase\' disabled>Action</NavDropdown.Item>\\n <NavDropdown.Divider />\\n <NavDropdown.Item >Documentation</NavDropdown.Item>\\n <NavDropdown.Item>Tutorials</NavDropdown.Item>\\n <NavDropdown.Item>Courses</NavDropdown.Item>\\n </NavDropdown>\\n </Nav>\\n <Nav className=\'me-auto d-flex align-items-center\'>\\n <img\\n src={logo}\\n width=\\"50\\"\\n className=\\"img-fluid\\"\\n alt=\\"React Bootstrap logo\\"\\n />\\n </Nav>\\n <Nav className=\'d-flex align-items-center\'>\\n <Form >\\n <Row>\\n <Col xs=\\"auto\\">\\n <Form.Control\\n type=\\"text\\"\\n placeholder=\\"Search React Courses\\"\\n className=\\" mr-sm-2\\"\\n />\\n </Col>\\n <Col xs=\\"auto\\">\\n <Button variant=\\"outline-primary\\" type=\\"submit\\">Search</Button>\\n </Col>\\n </Row>\\n </Form>\\n </Nav>\\n </Navbar.Collapse>\\n </Container>\\n </Navbar>\\n);\\nexport default Header;\\n\\n
The component we just created in the snippet above is the Header
component, which contains the navigation menu. Next, we will create a new file named SideCard.jsx
— also in the components
directory — with the following contents:
import Button from \\"react-bootstrap/Button\\";\\nimport Alert from \'react-bootstrap/Alert\';\\nimport Card from \\"react-bootstrap/Card\\";\\nimport CardImg from \\"react-bootstrap/CardImg\\";\\nimport CardBody from \\"react-bootstrap/CardBody\\";\\nimport CardTitle from \\"react-bootstrap/CardTitle\\";\\nimport CardSubtitle from \\"react-bootstrap/CardSubtitle\\";\\nimport CardText from \\"react-bootstrap/CardText\\";\\n\\nconst BANNER = \\"https://i.imgur.com/CaKdFMq.jpg\\";\\nconst SideCard = () => (\\n <>\\n <div className=\\"mt-4 mt-md-0\\">\\n <Alert variant=\\"danger\\" className=\\"d-none d-lg-block\\">\\n <strong>Account not activated.</strong>\\n </Alert>\\n <Card>\\n <CardImg className=\\"img-fluid\\" width=\\"10%\\" src={BANNER} alt=\\"banner\\" />\\n <CardBody>\\n <CardTitle className=\\"h3 pt-2 font-weight-bold text-secondary\\">\\n Glad Chinda\\n </CardTitle>\\n <CardSubtitle\\n className=\\"text-secondary font-weight-light text-uppercase\\"\\n style={{ fontSize: \\"0.6rem\\" }}\\n >\\n Web Developer, Lagos\\n </CardSubtitle>\\n <CardText\\n className=\\"text-secondary mt-2\\"\\n style={{ fontSize: \\"0.75rem\\" }}\\n >\\n Full-stack web developer learning new hacks one day at a time. Web\\n technology enthusiast. Hacking stuffs @theflutterwave.\\n </CardText>\\n <Button variant=\\"success\\" className=\\"font-weight-bold\\">\\n View Profile\\n </Button>\\n </CardBody>\\n </Card>\\n </div>\\n </>\\n);\\nexport default SideCard;\\n\\n
Once that’s done, create a file named Post.jsx
in the components
directory and add the following code snippet to it:
import { useState, useEffect } from \\"react\\";\\nimport axios from \\"axios\\";\\nimport Badge from \\"react-bootstrap/Badge\\";\\n\\nconst Post = () => {\\n const [post, setPost] = useState(null);\\n useEffect(() => {\\n axios\\n .get(\\n \\"https://baconipsum.com/api/?type=meat-and-filler&paras=4&format=text\\"\\n )\\n .then((response) => setPost(response.data));\\n }, []);\\n return (\\n <>\\n {post && (\\n <div>\\n <div className=\\"text-uppercase text-info font-weight-bold\\">\\n Editor\'s Pick\\n <span className=\\"ms-2 text-uppercase text-info font-weight-bold\\">\\n <Badge\\n bg=\\"success\\"\\n className=\\"text-uppercase px-2 py-1 ml-6 mb-1 align-middle\\"\\n style={{ fontSize: \\"0.75rem\\" }}\\n >\\n New\\n </Badge>\\n </span>\\n </div>\\n\\n <span className=\\"d-block pb-4 h2 text-dark border-bottom border-gray\\">\\n Getting Started with React\\n </span>\\n <article\\n className=\\"pt-4 text-secondary text-start\\"\\n style={{ fontSize: \\"0.9rem\\", whiteSpace: \\"pre-line\\" }}\\n >\\n {post}\\n </article>\\n </div>\\n )}\\n </>\\n );\\n};\\nexport default Post;\\n\\n
In the code above, we created a Post
component that renders a post on the page. We initialized the component’s state by setting the post property to null
.
After the component was mounted, we used the useEffect
Hook and Axios to retrieve a random post of four paragraphs from the Bacon Ipsum JSON API, and changed our post field to the data returned from this API.
Finally, modify the src/App.jsx
file to look like the following snippet:
import \'./App.css\'\\nimport Container from \'react-bootstrap/Container\';\\nimport Row from \'react-bootstrap/Row\';\\nimport Col from \'react-bootstrap/Col\';\\nimport Header from \'./components/Header\'\\nimport SideCard from \'./components/SideCard\';\\nimport Post from \'./components/Post\';\\n\\nfunction App() {\\n return (\\n <>\\n <Header />\\n <main className=\\"my-5 mx-0\\">\\n <Container >\\n <Row>\\n <Col\\n xs={12}\\n md={4}\\n >\\n <SideCard />\\n </Col>\\n <Col\\n xs={12}\\n md={8}\\n className=\'ps-md-4\'>\\n <Post />\\n </Col>\\n </Row>\\n </Container>\\n </main>\\n </>\\n )\\n}\\n\\nexport default App\\n\\n
In the code above, we simply included the Header
, SideCard
, and Post
components in the App
component. Notice how we used a couple of responsive utility classes provided by Bootstrap to adapt our app to different screen sizes.
If you run the app now with the command npm run dev, your app should start on port 5173 and look like this:
In the previous section, we employed a set of utility classes to adapt our app to different screen sizes. These utility classes are part of the Bootstrap grid system, a utility that allows us to create responsive and adaptable layouts. It is based on a 12-column flexbox grid, which can be customized to create layouts of varying complexity:
\\nBootstrap uses a series of container, row, and column elements that work together to align content on different screen sizes.
Container
In Bootstrap, the Container
element is essential for grid layouts because it houses other grid elements. Bootstrap offers two containers: a Container
with a fixed, centered width for standard layouts, and a Container fluid
for full-width layouts.
Row
The Row
element, used within the Container
element, forms horizontal containers for columns, ensuring proper alignment and equal height.
Column
The Column
element is the primary building block of the grid system. It is placed inside rows and defines how much horizontal space each item occupies. The columns are designated by the col-
class, which is followed by a number from 1 to 12. For example, col-6
will create a column that spans half the width of its parent row:
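For instance, a two-column split using col-6 might look like this quick sketch:

<div class="row">
  <div class="col-6">Left half</div>
  <div class="col-6">Right half</div>
</div>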
Bootstrap also provides responsive breakpoint classes that allow you to control the layout of columns at different screen sizes. These classes are typically applied alongside the col-
classes:
col-sm-: Applies to small screens, with a minimum width of 576px\\ncol-md-: Applies to medium screens, with a minimum width of 768px\\ncol-lg-: Applies to large screens, with a minimum width of 992px\\ncol-xl-: Applies to extra-large screens, with a minimum width of 1200px\\n
Here’s a simple example of a Bootstrap grid layout:
\\n<div class=\\"container\\">\\n <div class=\\"row\\">\\n <div class=\\"col-md-4\\">Column 1</div>\\n <div class=\\"col-md-4\\">Column 2</div>\\n <div class=\\"col-md-4\\">Column 3</div>\\n </div>\\n</div>\\n\\n
In this example, we have a container that holds a row with three columns. On medium-sized screens and larger, each column occupies four out of 12 available columns, creating a three-column layout:
\\nAlternatively, Bootstrap offers an auto-layout feature that enables users to create responsive layouts without specifying the exact widths of columns. In an auto-layout, columns with a col
class will automatically size themselves to be equal in width within the same row. This means that if you have three columns with col
classes inside a row, each will take up an equal portion of the available width:
<div class=\\"container\\">\\n <div class=\\"row\\">\\n <div class=\\"col\\">Column 1</div>\\n <div class=\\"col\\">Column 2</div>\\n <div class=\\"col\\">Column 3</div>\\n </div>\\n</div>\\n\\n
In this example, the output reflects the previous one: all three columns are of equal width, each occupying one-third of the row:
\\nUpon closer inspection of the App.jsx
code, you’ll notice that we used a slightly different syntax when creating our columns. This is because the col
element in React is a component from the React-Bootstrap library that receives its responsive classes as props:
<Container >\\n <Row>\\n <Col\\n xs={{ size: 12, order: 2 }}\\n md={{ size: 4, order: 1 }}\\n >\\n <SideCard />\\n </Col>\\n <Col\\n xs={{ size: 12, order: 1 }}\\n md={{ size: 8, order: 2 }}\\n className=\'ps-md-4\'>\\n <Post />\\n </Col>\\n </Row>\\n</Container>\\n\\n
In this example, we used the Container
, Row
, and Col
components from the React-Bootstrap library to structure our grid. We also specified the column widths using the xs
(extra small) and md
(medium) props.
There are 12 columns in this grid. On md
(medium-sized) and larger screens, the column containing the SideCard
component takes up 4 columns, and the column containing the Post
takes 8 columns:
On extra small screens, each column spans the entire row, but with an ordering of 2
and 1
, respectively. This means that the first column appears after the second column on extra small screens, and the second one appears before the first.
Finally, this is how our responsive grid layout will look on different screens:
\\nOne of the most notable drawbacks of using Bootstrap is that every app created with it tends to look the same. However, Bootstrap allows us to customize the appearance and feel of our app by overriding its default style and creating custom styling using a preprocessor like Sass.
\\nBootstrap provides access to several Sass variables that define its default styling attributes, such as colors, typography, spacing, and more. These variables can be overridden or customized to create a unique look and feel for our application. For example, we can change the primary color and font like this:
\\n$primary-color: #ff0000; // Change the primary color to red\\n$font-family-sans-serif: \'Helvetica Neue\', Arial, sans-serif; // Change the default font\\n\\n
To get started with Sass, install the compiler with the following command:
\\nnpm install -D sass-embedded\\n\\n
The package above compiles our Sass files and lets us see the changes we make to them in real time.
\\nThe Bootstrap team advises against modifying the core files, so we need to create a custom Sass stylesheet that imports Bootstrap. Therefore, the next step is to create a custom.scss
file in the src
directory of our project and import Bootstrap’s source files:
// Include all of Bootstrap\\n@import \\"../node_modules/bootstrap/scss/bootstrap\\";\\n\\n
The Bootstrap file we are importing resides within the /node_modules/bootstrap
directory, which is the directory housing the core Bootstrap files. Within this directory, you’ll find three subfolders: dist
, js
, and scss
.
The dist
folder contains all the compiled Sass files in CSS format, the js
folder contains all of Bootstrap’s JavaScript files, and the scss
folder contains the Sass files with the default styles.
After creating the custom scss
file, your project’s src
directory file structure should look like the following:
├── src\\n│ ├── assets\\n│ ├── components\\n│ ├── App.css\\n│ ├── App.jsx\\n│ ├── custom.scss\\n│ ├── index.css\\n│ └── main.jsx\\n\\n
With the setup in place, we can now begin modifying our Bootstrap styles. But first, we need to understand how Bootstrap styles are arranged and how to define custom variables.
\\nBootstrap allows users to override Sass properties such as variables, functions, maps, etc. However, there is a specific order that must be followed when modifying these properties in a custom Sass file.
\\nFor example, custom variables must be defined before the import statements in the file. So, if we were to put the example from earlier into our custom.scss
file, it would be arranged as follows:
$primary: #ff0000;\\n$success: #ff0000;\\n$font-family-sans-serif: \'Helvetica Neue\', Arial, sans-serif;\\n\\n@import \'../node_modules/bootstrap/scss/bootstrap\';\\n\\n
Every Sass variable in Bootstrap is declared with the !default flag, which allows us to override the value without modifying the source code.
The example above changes the theme’s primary color and font. The list of Bootstrap’s variables can be found in the
\\n../node_modules/bootstrap/scss/variables.scss
directory of your project. Edit the variables in the custom.scss
file accordingly to change the theme’s appearance.
Next, in the main.jsx
file import the custom.scss
file below the line where you imported the minified Bootstrap CSS like so:
import { StrictMode } from \'react\'\\nimport { createRoot } from \'react-dom/client\'\\nimport \'./index.css\'\\nimport App from \'./App.jsx\'\\n// Add Bootstrap minified CSS\\nimport \'bootstrap/dist/css/bootstrap.min.css\';\\n// Add custom Sass file\\nimport \'./custom.scss\';\\n\\ncreateRoot(document.getElementById(\'root\')).render(\\n <StrictMode>\\n <App />\\n </StrictMode>,\\n)\\n\\n
The success
variable that we modified in the custom.scss
file is used to define the color of both the button
and badge
components on the page. If you return to your web browser, you will see that they have now taken on the red color specified in our custom Sass file:
With this knowledge, you can tailor your application’s design to your liking without any limitations. Please refer to the documentation for more information on Bootstrap customization with Sass.
\\nIn this tutorial, we explored how to integrate React-Bootstrap into React applications, showing both basic and advanced usage patterns. We learned how to build components using React-Bootstrap’s extensive component library and demonstrated how to create responsive layouts using its grid system.
\\nWe have only used a few React-Bootstrap components in this tutorial, including alerts, badges, buttons, cards, navbars, navs, forms, and containers. There are many additional React-Bootstrap components you can experiment with, such as modals, tooltips, carousels, accordions, toasts, spinners, pagination, and more.
\\nCheck out the official React-Bootstrap documentation to find out more ways the library can be used.
\\nEffective state management is crucial for maintaining a consistent and reliable data flow within an application.
\\nNuxt 3 provides the useState
composable as a convenient out-of-the-box solution for state management. For Nuxt developers, mastering useState
and the hydration process can help optimize performance and scalability.
In this article, we will delve deep into these concepts, demonstrating how useState
can effectively replace ref
in many scenarios and prevent Nuxt hydration mismatches that can lead to unexpected behavior and errors.
Nuxt offers two primary rendering modes:
\\nUniversal mode is the default mode for Nuxt and enables server-side rendering (SSR). In Nuxt, SSR involves rendering a web page’s initial HTML on the server, sending it to the client, and then adding event listeners and states to make it interactive.
\\nClient-side rendering (CSR) mode renders the entire page on the client side. The browser downloads and executes JavaScript code, generating the HTML elements.
\\nIn this simple example, the Nuxt page displays a list of stock symbols and their corresponding prices:
\\n<script setup lang=\\"ts\\">\\n import { ref } from \'vue\'\\n const stocks = ref([\\n { symbol: \'AAPL\', price: 150 },\\n { symbol: \'GOOGL\', price: 2000 },\\n { symbol: \'AMZN\', price: 3500 },\\n ])\\n</script>\\n<template>\\n <div>\\n <h2>Stock Prices</h2>\\n <ul>\\n <li v-for=\\"stock in stocks\\" :key=\\"stock.symbol\\">\\n {{ stock.symbol }}: {{ stock.price }}\\n </li>\\n </ul>\\n </div>\\n</template>\\n\\n
First, let’s have a look at how the page will be loaded in CSR mode.
\\nIn CSR mode, the initial HTML is sent to the browser, and the client-side JavaScript takes over to render the application. The initial HTML will look like this:
\\n<html data-capo=\\"\\">\\n<head>\\n <meta charset=\\"utf-8\\">\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1\\">\\n ...\\n</head>\\n\\n<body>\\n <div id=\\"__nuxt\\"></div>\\n <div id=\\"teleports\\"></div>\\n <script type=\\"application/json\\" data-nuxt-logs=\\"nuxt-app\\">[[]]</script>\\n <script type=\\"application/json\\" data-nuxt-data=\\"nuxt-app\\" data-ssr=\\"false\\"\\n id=\\"__NUXT_DATA__\\">[{\\"serverRendered\\":1},false]</script>\\n <script>window.__NUXT__ = {}; window.__NUXT__.config = { public: {}, app: { baseURL: \\"/\\", buildId: \\"dev\\", buildAssetsDir: \\"/_nuxt/\\", cdnURL: \\"\\" } }</script>\\n</body>\\n</html>\\n\\n
The above HTML does not include the application state but contains a root element, such as <div id=\\"__nuxt\\">.
The browser will download the JavaScript files, then the client-side JavaScript mounts the Vue application onto the element, initializes the state, and renders the full HTML in the browser.
However, for large applications using CSR, the initial load time can be significantly slower. Search engines may struggle to crawl and index content in CSR-rendered pages, potentially resulting in lower search result rankings. To improve user experience and optimize SEO, we can utilize SSR.
\\nWhen using the default universal mode, which enables SSR, the server renders the HTML with the stock prices before sending it to the client:
\\n<!DOCTYPE html>\\n<html data-capo=\\"\\">\\n <head>\\n <meta charset=\\"utf-8\\">\\n <meta name=\\"viewport\\" content=\\"width=device-width, initial-scale=1\\">\\n // ... removed for simplicity\\n </head>\\n <body>\\n <div id=\\"__nuxt\\">\\n <!--[--\x3e\\n <div data-v-inspector=\\"pages/stock1.vue:2:5\\">\\n <h2 data-v-inspector=\\"pages/stock1.vue:3:7\\">Stock Prices</h2>\\n <ul data-v-inspector=\\"pages/stock1.vue:4:7\\">\\n <!--[--\x3e\\n <li data-v-inspector=\\"pages/stock1.vue:5:9\\">AAPL: 150</li>\\n <li data-v-inspector=\\"pages/stock1.vue:5:9\\">GOOGL: 2000</li>\\n <li data-v-inspector=\\"pages/stock1.vue:5:9\\">AMZN: 3500</li>\\n <!--]--\x3e\\n </ul>\\n </div>\\n <!--]--\x3e\\n </div>\\n <div id=\\"teleports\\"></div>\\n <script type=\\"application/json\\" data-nuxt-logs=\\"nuxt-app\\">\\n </script>\\n <script type=\\"application/json\\" data-nuxt-data=\\"nuxt-app\\" data-ssr=\\"true\\" id=\\"__NUXT_DATA__\\">\\n // ... removed for simplicity\\n </script>\\n </body>\\n</html>\\n\\n
The HTML content above includes the stock prices pre-generated by the server before being sent to the client. Pre-rendering the HTML on the server improves the initial page load time and enhances SEO by allowing search engines to index the rendered HTML easily.
\\nHowever, SSR is not without its challenges. Hydration mismatch is one of the tricky issues.
\\nHydration is the client-side process of converting the server-rendered HTML into interactive HTML by attaching JavaScript behavior and initializing application states.
\\nThe following sequence diagram illustrates the steps in hydration based on the stock prices example:
A hydration mismatch error occurs when the browser generates a DOM structure that differs from the server-rendered HTML. These mismatches can lead to visual glitches or unexpected behavior, disrupting the user experience.
\\nOne of the main causes of hydration mismatch is handling dynamic data using SSR. Let’s look at an example:
\\n<script setup lang=\\"ts\\">\\n import { ref } from \'vue\'\\n const stocks = ref([\\n { symbol: \'AAPL\', price: generateRandomPrice() },\\n { symbol: \'GOOGL\', price: generateRandomPrice() },\\n { symbol: \'AMZN\', price: generateRandomPrice() },\\n ])\\n\\n function generateRandomPrice() {\\n return Math.floor(Math.random() * 900) + 100\\n }\\n</script>\\n<template>\\n <div>\\n <h2>Stock Prices</h2>\\n <ul>\\n <li v-for=\\"stock in stocks\\" :key=\\"stock.symbol\\">\\n {{ stock.symbol }}: ${{ stock.price }}\\n </li>\\n </ul>\\n </div>\\n </template>\\n\\n
Here, we use the generateRandomPrice()
function to generate a random price for each stock and use ref
to store the stock price. Everything seems fine, right?
However, when running the above page, we notice that the stock prices flicker, and the following warning is shown in the console:
\\nThe warning message in the console points to the issue:
\\n— rendered on server: GOOGL: $741\\n— expected on client: GOOGL: $282.\\n\\n
The stock prices are generated twice! When the page is initially rendered on the server, the stocks
array is created, and the generateRandomPrice()
function is called to generate random stock prices. Once the initial HTML is sent to the client, the client-side JavaScript takes over and reinitializes the stocks
array with a new set of random prices.
This discrepancy between the server-generated and client-generated random prices results in a hydration mismatch.
\\nref
In Nuxt, ref
is used to create a reactive variable for storing and managing component-level states. While ref
provides a simple way to manage reactive state, it can sometimes lead to Nuxt hydration issues, particularly when used to control the initial rendering of the DOM.
The root cause of the hydration mismatch described above is the use of ref
to store the stock price. Variables created by ref
are not automatically serialized and sent to the client during server-side rendering. As a result, when the client-side JavaScript takes over, it initializes the stocks
array from scratch, leading to a mismatch between the server-rendered and client-side states.
To resolve this issue, we can leverage useState
.
useState
: A hydration-friendly solutionNuxt 3 introduces useState
, a composable that provides a reactive and persistent state across components and requests, making it ideal for managing data that impacts the server-rendered HTML. Unlike ref
, useState
is specifically designed to handle state hydration in Nuxt’s SSR mode. When a page is rendered on the server, the useState
values are serialized and sent to the client. This enables the client-side JavaScript to initialize the state with the values from the server side, avoiding re-running the setup script.
useState
is a composable function with the following type definition. It accepts a unique key and a factory function to initialize the state:
useState<T>(init?: () => T | Ref<T>): Ref<T>\\nuseState<T>(key: string, init?: () => T | Ref<T>): Ref<T>\\n\\n
key
is a unique identifier for the state. It ensures the state is uniquely identified and persisted. If not provided, a key is automatically generated based on the file and line number.
init
is a factory function used to define the initial state and is only called once during the first SSR request.
Here is the updated version of the stock price page using useState
:
<script setup lang=\\"ts\\">\\nconst stocks = useState(\'stocks\', () => [\\n { symbol: \'AAPL\', price: generateRandomPrice() },\\n { symbol: \'GOOGL\', price: generateRandomPrice() },\\n { symbol: \'AMZN\', price: generateRandomPrice() },\\n])\\n\\nfunction generateRandomPrice() {\\n return Math.floor(Math.random() * 900) + 100\\n}\\n</script>\\n\\n<template>\\n <div>\\n <h2>Stock Prices</h2>\\n <ul>\\n <li v-for=\\"stock in stocks\\" :key=\\"stock.symbol\\">\\n {{ stock.symbol }}: ${{ stock.price }}\\n </li>\\n </ul>\\n </div>\\n</template>\\n\\n
Here, we use useState
to store the stock price. This ensures the state is created only once on the server side and shared between the server and the client browser, thus preventing the script from running again on the client side.
Note that data within useState
is serialized to JSON during transmission. Therefore, we should avoid non-serializable data types such as classes, functions, or symbols, as including them will cause runtime errors.
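For example, a class instance won't survive the JSON round trip, while a plain object will. A quick sketch (the Stock class here is hypothetical):

// ❌ Avoid: class instances can't be serialized to JSON for the client
class Stock {
  constructor(public symbol: string, public price: number) {}
}
const broken = useState('broken-stocks', () => [new Stock('AAPL', 192)])

// ✅ Prefer: plain objects and arrays serialize cleanly
const ok = useState('stocks', () => [{ symbol: 'AAPL', price: 192 }])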
With useState
, we can return a reactive state variable. To mutate the state, we can assign a new value to the value
property of the ref
object, as shown in the example below.
function addStock(symbol) {\\n stocks.value = [...stocks.value, { symbol, price: generateRandomPrice()}]\\n}\\n\\n
Nuxt will automatically detect these changes and update any components that depend on this state, triggering re-renders as necessary.
\\nTo clear the cached state of useState
, we can use clearNuxtState
.
For example, we can add the following function to the previous stock price page:
\\nconst resetState = () => {\\n clearNuxtState(\'stocks\')\\n}\\n...\\n// in template, add a button\\n<button @click=\\"resetState\\">Reset</button>\\n\\n
Here we pass in a stocks
key to delete the cached stocks
state. Calling the utility function without keys will invalidate all states.
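Clearing everything looks like this:

// Calling clearNuxtState with no arguments invalidates all useState values
clearNuxtState()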
Using shallowRef to improve performance
When using useState
with large, complex objects, any change to a deeply nested property triggers re-renders, even if that property isn’t directly used in the template. This can lead to unnecessary computations and performance bottlenecks.
shallowRef
is a function in Vue 3’s reactivity API that, similar to ref
, creates a reactive reference. However, unlike ref
, it only tracks changes to the top-level value of the reference and does not make nested properties deeply reactive.
To initialize a state with shallowRef
, we use the following syntax:
useState('myState', () => shallowRef({...}))
Let’s apply shallowRef
to our example:
const stocks = useState(\'stocks\', () => shallowRef([\\n { symbol: \'AAPL\', price: generateRandomPrice() },\\n { symbol: \'GOOGL\', price: generateRandomPrice() },\\n { symbol: \'AMZN\', price: generateRandomPrice() },\\n ]))\\n\\n
Please note that modifying the stocks
array directly won’t trigger a re-render because the array is considered a nested part of the shallowRef
:
// This won't trigger a re-render: push() mutates a nested part of the shallowRef
function addStock(symbol: string) {
  stocks.value.push({ symbol, price: generateRandomPrice() })
}

// Assigning a new array to stocks.value will trigger a re-render,
// because it's the top-level change that shallowRef tracks
function addStock(symbol: string) {
  stocks.value = [...stocks.value, { symbol, price: generateRandomPrice() }]
}
useState vs. ref
useState
is designed to handle hydration automatically. Unlike ref
, the state managed by useState
persists between page navigations, making it suitable for data that needs to be shared between components or pages. Instead of relying on prop drilling, we can use useState
to share states across the applications.
For example, in the following case, useState
is used to store and retrieve the authentication state, auth
. The middleware relies on this shared state to determine the user’s login status and make redirection decisions:
// assumes that another part of the app (e.g., a login component) is responsible for populating the auth state appropriately.\\n// Code snippet source: https://nuxt.com/docs/api/utils/define-nuxt-route-middleware\\nexport default defineNuxtRouteMiddleware((to, from) => {\\n const auth = useState(\'auth\')\\n\\n if (!auth.value.isAuthenticated) {\\n return navigateTo(\'/login\')\\n }\\n\\n if (to.path !== \'/dashboard\') {\\n return navigateTo(\'/dashboard\')\\n }\\n})\\n\\n
While useState
is an SSR-friendly ref
replacement, ref
can still be useful in some situations. When dealing with state that is local to a single component and doesn't need to be part of the server-rendered payload, using ref can be more performant, as it avoids the overhead of useState's global approach.
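For example, purely client-side UI state, such as whether a dropdown menu is open, is a good fit for ref. A minimal sketch (the component markup is illustrative):

<script setup lang="ts">
import { ref } from 'vue'

// Purely client-side UI state: it renders the same on server and client
// initially, so a plain ref avoids useState's serialization overhead
const isMenuOpen = ref(false)

function toggleMenu() {
  isMenuOpen.value = !isMenuOpen.value
}
</script>

<template>
  <button @click="toggleMenu">Menu</button>
  <nav v-if="isMenuOpen">
    <a href="/">Home</a>
    <a href="/stocks">Stocks</a>
  </nav>
</template>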
Here are some useful tools that help us troubleshoot and resolve Nuxt hydration errors.
\\nNuxt DevTools is an official, powerful suite of visual tools that integrates seamlessly into your development workflow. You can find out how to get started here.
\\nThe following screenshot shows the “State” tab, which provides a real-time view of the application’s state. You can see the values of useState
variables and other reactive data, allowing us to track changes, understand how data is being updated, and identify any unexpected behavior:
Nuxt-hydration is a valuable development tool designed to identify and debug hydration issues in Nuxt applications.
\\nNuxt-hydration helps you identify hydration issues by providing detailed component-level insights and allowing you to view the SSR-rendered HTML.
\\nBelow is a popup highlighting the hydration mismatch issue discussed earlier:
While useState
is a convenient option for simple data sharing in Nuxt, more complex applications may require a dedicated state management solution. A popular choice is Pinia.
Pinia is the officially recommended state management library for Vue, making it a natural fit for Nuxt applications.
\\nPinia builds on the concepts of useState
but offers a more robust and scalable solution for state management as our application’s complexity increases. Pinia’s API, which closely resembles Vue’s Composition API, makes it easy to learn and use. It promotes a modular approach by encouraging the creation of separate stores for different parts of the application, significantly improving code organization and maintainability.
The choice between Pinia and useState
largely depends on the application’s complexity. For simple use cases, useState
is a solid choice, providing an enhancement over ref
. However, as projects scale, Pinia’s richer features and inherent scalability provide clear advantages.
In this article, we’ve explored Nuxt state management and rendering, diving deep into the nuances of useState
. We’ve seen how useState
is essential for managing state across components and unlike ref
, it effectively addresses Nuxt hydration challenges by design. This makes it the preferred choice for managing state in many Nuxt applications.
Additionally, tools like Nuxt DevTools and nuxt-hydration are invaluable for debugging, and Pinia offers a powerful solution for larger projects.
\\nUnderstanding these concepts is important for building efficient and scalable Nuxt applications. I hope this article has been helpful! The code examples in the article are available here.
\\n In mobile development, it is crucial to be able to efficiently display lists of data. As React Native developers, we have access to several powerful tools for its purpose. Whether we’re using the Scrollview
, SectionList
, or FlatList
components, React Native provides a suite of options for handling data display.
However, as datasets grow more complex and performance demands increase, the need for a better-optimized solution becomes essential. Enter Shopify’s FlashList
— a game-changer that offers significant improvements over traditional list components.
In this article, we’ll explore the evolution of list components in React Native, examining the limitations of ScrollView
and the advancements brought by FlatList
, SectionList
, and most recently, FlashList
, which enhances both performance and the development experience.
ScrollView
The ScrollView
component is a simple but limited option for rendering or displaying lists of data. It renders all the child components — i.e., the entire list of data — at once, regardless of its size.
See the code below:
\\nimport { StyleSheet, Text, ScrollView } from \'react-native\';\\nimport { SafeAreaView, SafeAreaProvider } from \'react-native-safe-area-context\';\\n\\nconst data = [\\n \'Alice\',\\n \'Bob\',\\n \'Charlie\',\\n \'Diana\',\\n \'Edward\',\\n \'Fiona\',\\n \'George\',\\n \'Hannah\',\\n \'Ian\',\\n \'Jasmine\',\\n \'Kevin\',\\n \'Liam\',\\n \'Mia\',\\n \'Nathan\',\\n \'Olivia\',\\n \'Patrick\',\\n \'Quinn\',\\n \'Rebecca\',\\n \'Samuel\',\\n \'Tina\',\\n \'Quinn\',\\n \'Rebecca\',\\n \'Samuel\',\\n \'Tina\',\\n];\\n\\nconst App = () => {\\n return (\\n <SafeAreaProvider>\\n <SafeAreaView style={styles.container} edges={[\'top\']}>\\n <ScrollView>\\n {data.map((item, idx) => (\\n <Text key={idx} style={styles.item}>\\n {item}\\n </Text>\\n ))}\\n </ScrollView>\\n </SafeAreaView>\\n </SafeAreaProvider>\\n );\\n};\\nexport default App;\\n\\nconst styles = StyleSheet.create({\\n container: {\\n flex: 1,\\n paddingTop: 22,\\n },\\n item: {\\n padding: 10,\\n fontSize: 18,\\n height: 44,\\n },\\n});\\n\\n
The code’s output would look like this:
\\nThis is a typical approach for rendering lists of data using ScrollView
. However, it can cause performance issues when handling large datasets. Rendering a large dataset all at once consumes excessive memory because ScrollView
lacks virtualization or lazy loading capabilities, which results in slow data rendering.
FlatList
To solve the above performance bottleneck, React Native introduced the FlatList
component, which optimizes rendering performance by using virtualization. This means that FlatList
renders items lazily — only items visible on the screen are rendered and items that are not on the screen viewport are removed. By doing so, this component saves memory and processing time, making FlatList
the better option for rendering long lists or large datasets.
FlatList
‘s main features include:
ScrollToIndex
support
See the code below:
import { StyleSheet, Text, FlatList } from 'react-native';
import { SafeAreaView, SafeAreaProvider } from 'react-native-safe-area-context';

const data = [
  'Alice', 'Bob', 'Charlie', 'Diana', 'Edward', 'Fiona', 'George', 'Hannah',
  'Ian', 'Jasmine', 'Kevin', 'Liam', 'Mia', 'Nathan', 'Olivia', 'Patrick',
  'Quinn', 'Rebecca', 'Samuel', 'Tina', 'Quinn', 'Rebecca', 'Samuel', 'Tina',
];

const App = () => {
  return (
    <SafeAreaProvider>
      <SafeAreaView style={styles.container} edges={['top']}>
        <FlatList
          data={data}
          keyExtractor={(_: string, index: number) => index.toString()}
          renderItem={({ item }: { item: string }) => (
            <Text style={styles.item}>{item}</Text>
          )}
        />
      </SafeAreaView>
    </SafeAreaProvider>
  );
};
export default App;

const styles = StyleSheet.create({
  container: {
    flex: 1,
    paddingVertical: 22,
  },
  item: {
    padding: 10,
    fontSize: 18,
    height: 44,
  },
});
Unlike ScrollView
, FlatList
does the mapping for you. It has a keyExtractor
, which is used to extract a unique key for your dataset.
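Index-based keys are fine for a static list like this one, but if items can be added, removed, or reordered, a stable, unique key avoids unnecessary re-renders. A small sketch with an assumed id field on each item:

import { FlatList, Text } from 'react-native';

const users = [
  { id: 'u1', name: 'Alice' },
  { id: 'u2', name: 'Bob' },
];

const UserList = () => (
  <FlatList
    data={users}
    // Stable IDs let React Native track items across insertions and reorders
    keyExtractor={(item) => item.id}
    renderItem={({ item }) => <Text>{item.name}</Text>}
  />
);

export default UserList;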
SectionList
SectionList
is similar to FlatList
, as it is built on top of FlatList
with added support for section headers. It is specifically designed for grouped or categorized data. However, it also inherits the limitations and performance bottlenecks of FlatList
.
SectionList
‘s main features include:
ScrollToIndex
support
See the code below:
\\nimport { StyleSheet, Text, SectionList, View } from \'react-native\';\\nimport { SafeAreaView, SafeAreaProvider } from \'react-native-safe-area-context\';\\nconst data = [\\n {\\n title: \'Main dishes\',\\n data: [\'Pizza\', \'Burger\', \'Risotto\'],\\n },\\n {\\n title: \'Sides\',\\n data: [\'French Fries\', \'Onion Rings\', \'Fried Shrimps\'],\\n },\\n {\\n title: \'Drinks\',\\n data: [\'Water\', \'Coke\', \'Beer\'],\\n },\\n {\\n title: \'Desserts\',\\n data: [\'Cheese Cake\', \'Ice Cream\'],\\n },\\n {\\n title: \'Sides\',\\n data: [\'French Fries\', \'Onion Rings\', \'Fried Shrimps\'],\\n },\\n {\\n title: \'Main dishes\',\\n data: [\'Pizza\', \'Burger\', \'Risotto\'],\\n },\\n];\\nconst App = () => (\\n <SafeAreaProvider>\\n <SafeAreaView style={styles.container} edges={[\'top\']}>\\n <SectionList\\n sections={data}\\n keyExtractor={(item: string, index: number) => item + index}\\n renderItem={({ item }: { item: string }) => (\\n <View style={styles.item}>\\n <Text style={styles.title}>{item}</Text>\\n </View>\\n )}\\n renderSectionHeader={({\\n section: { title },\\n }: {\\n section: { title: string };\\n }) => <Text style={styles.header}>{title}</Text>}\\n />\\n </SafeAreaView>\\n </SafeAreaProvider>\\n);\\nexport default App;\\n\\nconst styles = StyleSheet.create({\\n container: {\\n flex: 1,\\n },\\n item: {\\n padding: 10,\\n fontSize: 18,\\n },\\n header: {\\n padding: 10,\\n fontSize: 20,\\n backgroundColor: \'#ddd\',\\n },\\n});\\n\\n
In the code above, renderSectionHeader
is used to display the header of each section. On iOS, the section headers stick to the top of the ScrollView
by default.
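If you'd rather not have sticky headers, SectionList exposes a stickySectionHeadersEnabled prop you can set explicitly. A minimal sketch reusing the data array from the example above:

<SectionList
  sections={data}
  // Opt out of the default iOS sticky-header behavior
  stickySectionHeadersEnabled={false}
  keyExtractor={(item: string, index: number) => item + index}
  renderItem={({ item }: { item: string }) => <Text>{item}</Text>}
  renderSectionHeader={({ section: { title } }: { section: { title: string } }) => (
    <Text>{title}</Text>
  )}
/>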
See the output below:
\\nFlashList
Shopify’s FlashList
takes performance optimization to the next level. Built with memory management techniques and optimized rendering, FlashList
is designed to effortlessly and efficiently handle massive datasets while maintaining an API similar to FlatList
. A large part of FlashList's appeal is that you get better performance without having to change much of your existing code.
Key features of FlashList
include:
An API nearly identical to FlatList
FlashList offers smooth scrolling and lower memory usage, even for large and complex datasets
Migrating from FlatList to FlashList requires minimal code modification, which we'll demonstrate in the example below
, you have to first install it. To do so, run either of the commands below:
/* yarn */\\nyarn add @shopify/flash-list\\n\\n/* expo */\\nnpx expo install @shopify/flash-list\\n\\n
Then you can use it like so:
\\nimport { StyleSheet, Text } from \'react-native\';\\nimport { SafeAreaView, SafeAreaProvider } from \'react-native-safe-area-context\';\\nimport { FlashList } from \'@shopify/flash-list\';\\nconst data = [\\n \'Alice\',\\n \'Bob\',\\n \'Charlie\',\\n \'Diana\',\\n \'Edward\',\\n \'Fiona\',\\n \'George\',\\n \'Hannah\',\\n \'Ian\',\\n \'Jasmine\',\\n \'Kevin\',\\n \'Liam\',\\n \'Mia\',\\n \'Nathan\',\\n \'Olivia\',\\n \'Patrick\',\\n \'Quinn\',\\n \'Rebecca\',\\n \'Samuel\',\\n \'Tina\',\\n \'Jasmine\',\\n \'Kevin\',\\n \'Liam\',\\n \'Mia\',\\n \'Nathan\',\\n \'Olivia\',\\n \'Patrick\',\\n \'Quinn\',\\n \'Rebecca\',\\n \'Samuel\',\\n \'Tina\',\\n \'Jasmine\',\\n \'Kevin\',\\n \'Liam\',\\n \'Mia\',\\n \'Nathan\',\\n \'Olivia\',\\n \'Patrick\',\\n \'Quinn\',\\n \'Rebecca\',\\n \'Samuel\',\\n \'Tina\',\\n \'Jasmine\',\\n \'Kevin\',\\n \'Liam\',\\n \'Mia\',\\n \'Nathan\',\\n \'Olivia\',\\n \'Patrick\',\\n \'Quinn\',\\n \'Rebecca\',\\n \'Samuel\',\\n \'Tina\',\\n];\\nconst App = () => (\\n <SafeAreaProvider>\\n <SafeAreaView style={styles.container} edges={[\'top\']}>\\n <FlashList\\n data={data}\\n renderItem={({ item }) => <Text style={styles.item}>{item}</Text>}\\n keyExtractor={(_, index) => index.toString()}\\n estimatedItemSize={20}\\n />\\n </SafeAreaView>\\n </SafeAreaProvider>\\n);\\nexport default App;\\nconst styles = StyleSheet.create({\\n container: {\\n flex: 1,\\n },\\n item: {\\n padding: 10,\\n fontSize: 18,\\n },\\n});\\n\\n
The output will look like this:
\\nIf your app deals with large datasets or you’re seeking the smoothest user experience, consider migrating to FlashList
. With its simple API and superior performance, it’s the future of list components in React Native.
Having looked at the evolution of the different list components in React Native, their use cases, and performance capabilities, let’s quickly compare them:
List component | Rendering method | Use case | Performance
---|---|---|---
ScrollView | Renders all items at once | Small datasets | Poor
FlatList | Virtualized rendering | Large datasets | Good
SectionList | Virtualized rendering | Categorized datasets | Good
FlashList | Highly optimized virtualized rendering | Very large datasets | Excellent
React Native has always provided developers with several tools for displaying lists of data. Among these tools, FlashList
stands out as an exceptionally performant option, offering unique features that complement existing solutions like SectionList
and FlatList
. While components like ScrollView
, SectionList
, and FlatList
still work fine, Shopify’s FlashList
has raised the bar for performance. By adopting it, developers can build efficient applications without altering much of their existing code.
Every day, new AI products and tools emerge, making it feel like AI is taking the world by storm — and for good reason. AI assistants can be incredibly useful across various domains, including ecommerce, customer support, media and content creation, marketing, education, and more. The significance and utility of AI are undeniable.
\\nHaving expertise in AI in today’s rapidly evolving landscape can give you a huge advantage. The skill to build and ship AI agents is becoming increasingly sought after as the demand for AI-powered solutions continues to grow. The good news is that you don’t need to be an AI/ML expert to build AI agents and products. With the right toolset, building AI agents can be both accessible and enjoyable.
\\nThis tutorial will guide you through building AI agents from scratch. We’ll build, deploy, implement, and test a webpage FAQ generator AI agent within a frontend project. This agent will generate a selected number of FAQs based on specified topics and keywords. We’ll also explore how to enhance the accuracy of an AI agent by instructing it to use a predefined set of documents instead of relying solely on web data.
\\nAs with any development project, success hinges on choosing the right platform and tools. Here’s our powerful tech stack for building AI agents:
\\nThe BaseAI and Langbase duo is a powerful and flexible professional toolset for building and deploying AI agents and products with great DX. Developers can build, mix and match, test, and deploy AI agents and use them to create powerful AI products fast, easy, and at low cost. All major LLMs are supported and can be used with one unified API.
\\nGet excited, because by the end of this post, you’ll be well on your way to creating your very own AI agents like a pro. Let’s get started!
\\nAn AI agent is a software program that uses artificial intelligence to perform tasks or make decisions on its own, often interacting with users or systems. It can be a chatbot, virtual assistant, or any tool that learns from data and automates processes, making things easier and faster.
\\nTo use BaseAI efficiently, you need to understand the main functionalities it offers:
\\nIn this tutorial, we’ll explore the first two: AI pipes and memory.
\\nBaseAI works in close relation with Langbase, which provides a versatile AI Studio for building, testing, and deploying AI agents. The first step is to create a free account with Langbase. Once you have an account, you need to set up two things:
\\nNow you are ready to start using BaseAI!
\\nLet’s create a new Node project:
\\nmkdir building-ai-agents && cd building-ai-agents\\nnpm init -y\\nnpm install dotenv\\n\\n
Now, let’s initialize the new BaseAI project inside:
\\nnpx baseai@latest init\\n\\n
Normally, the base project structure looks like this:
\\nROOT (of your app)\\n├── baseai\\n| ├── baseai.config.ts\\n| ├── memory\\n| ├── pipes\\n| └── tools\\n├── .env (your env file)\\n└── package.json\\n\\n
Right now, your project may look a bit different. You may notice that the memory
, pipes
, and tools
directories are missing. Don’t worry — these are auto-generated when you create at least one memory, pipe, or tool respectively.
Also, before you start building AI agents, you need to add the Langbase API key
and OpenAI API key
in the project’s .env
file. Rename the env.baseai.example
file to .env
and put the API keys in the appropriate places:
# !! SERVER SIDE ONLY !!\\n# Keep all your API keys secret — use only on the server side.\\n\\n# TODO: ADD: Both in your production and local env files.\\n# Langbase API key for your User or Org account.\\n# How to get this API key https://langbase.com/docs/api-reference/api-keys\\nLANGBASE_API_KEY=\\"YOUR-LANGBASE-KEY\\"\\n\\n# TODO: ADD: LOCAL ONLY. Add only to local env files.\\n# Following keys are needed for local pipe runs. For providers you are using.\\n# For Langbase, please add the key to your LLM keysets.\\n# Read more: Langbase LLM Keysets https://langbase.com/docs/features/keysets\\nOPENAI_API_KEY=\\"YOUR-OPENAI-KEY\\"\\nANTHROPIC_API_KEY=\\nCOHERE_API_KEY=\\nFIREWORKS_API_KEY=\\nGOOGLE_API_KEY=\\nGROQ_API_KEY=\\nMISTRAL_API_KEY=\\nPERPLEXITY_API_KEY=\\nTOGETHER_API_KEY=\\nXAI_API_KEY=\\n\\n
N.B., the baseai.config.ts
file provides several configuration settings, one of which is to change the name of your .env
file to suit your needs. You can do this by setting the envFilePath
property.
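For instance, a minimal baseai.config.ts pointing BaseAI at a differently named env file might look like the sketch below; the envFilePath value is illustrative, and any other options are left at their defaults:

// baseai/baseai.config.ts
export const config = {
  // Tell BaseAI to read environment variables from this file instead of .env
  envFilePath: '.env.local',
};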
In this section, we’ll create your first AI agent — a webpage FAQ generator that generates a specified number of question-answer pairs about specific topics and keywords, with the selected tone.
\\n\\nTo create a new pipe, run the following:
\\nnpx baseai@latest pipe\\n\\n
The CLI will ask you for the name and description of the pipe, and whether it will be public or private. Set the name to “faqs-generator” and the description to “A webpage FAQs generator”. Finally, make the pipe private.
\\nOnce the pipe is created, you can find it in baseai/pipes/faqs-generator.ts
. Open it and replace the content with this:
import { PipeI } from \'@baseai/core\';\\n\\nconst pipeFaqsGenerator = (): PipeI => ({\\n // Replace with your API key https://langbase.com/docs/api-reference/api-keys\\n apiKey: process.env.LANGBASE_API_KEY!,\\n name: \'faqs-generator\',\\n description: \'A webpage FAQs generator\',\\n status: \'private\',\\n model: \'openai:gpt-4o-mini\',\\n stream: true,\\n json: false,\\n store: true,\\n moderate: true,\\n top_p: 1,\\n max_tokens: 1000,\\n temperature: 0.7,\\n presence_penalty: 1,\\n frequency_penalty: 1,\\n stop: [],\\n tool_choice: \'auto\',\\n parallel_tool_calls: true,\\n messages: [\\n {\\n role: \'system\',\\n content: `You\'re a helpful AI assistant. Generate {{count}} frequently asked questions (FAQs) about {{topic}} using the keywords {{keywords}}. \\nEach FAQ should consist of a question followed by a concise answer. Ensure the answers are clear, accurate, and helpful for someone who is unfamiliar with the topic. Keep the tone {{tone}}.\\n`\\n }\\n ],\\n variables: [\\n { name: \'count\', value: \'\' }, \\n { name: \'topic\', value: \'\' }, \\n { name: \'keywords\', value: \'\' }, \\n { name: \'tone\', value: \'\' }],\\n memory: [],\\n tools: []\\n});\\nexport default pipeFaqsGenerator;\\n\\n
As you can see, the system prompt has now changed to suit our specific needs for FAQ generation:
\\n\\"You\'re a helpful AI assistant. Generate {{count}} frequently asked questions (FAQs) about {{topic}} using the keywords {{keywords}}. \\nEach FAQ should consist of a question followed by a concise answer. Ensure the answers are clear, accurate, and helpful for someone who is unfamiliar with the topic. Keep the tone {{tone}}.\\"\\n\\n
BaseAI allows you to use variables in your prompts. You can turn any text into a variable by putting it between {{}}
. So in our case, we need to create four variables:
count
: Sets the number of the FAQs we want to be generatedtopic
: Sets the main topic of the FAQskeywords
: Adds additional keywords to make the topic more specifictone
: Defines the tone of the generated contentThese variables are provided when you run the pipe. We’ll explore this in a moment.
\\nOnce we have created the pipe, we need to put it into action. Create a index.ts
file in the root and add this content:
import \'dotenv/config\';\\nimport {Pipe, getRunner} from \'@baseai/core\';\\nimport pipeFaqsGenerator from \'./baseai/pipes/faqs-generator\';\\n\\nconst pipe = new Pipe(pipeFaqsGenerator());\\n\\nasync function main() {\\n const {stream} = await pipe.run({\\n messages: [],\\n variables: [\\n { name: \'count\', value: \'3\' }, \\n { name: \'topic\', value: \'money\' }, \\n { name: \'keywords\', value: \'investment\' }, \\n { name: \'tone\', value: \'informative\' }],\\n stream: true\\n });\\n\\n const runner = getRunner(stream);\\n runner.on(\'connect\', () => {\\n console.log(\'Stream started.\\\\n\');\\n });\\n runner.on(\'content\', content => {\\n process.stdout.write(content);\\n });\\n runner.on(\'end\', () => {\\n console.log(\'\\\\nStream ended.\');\\n });\\n runner.on(\'error\', error => {\\n console.error(\'Error:\', error);\\n });\\n}\\n\\nmain();\\n\\n
Here, we run the pipe with the variables we want to use. Because we want to stream the response, we also set the stream property to true. We then extract the stream from the response, turn it into a runner, and use the runner's events to stream the content. Let's try it out.
To run the pipe, you first need to start the dev server:
\\nnpx baseai@latest dev\\n\\n
Then, in a new terminal, run the index.ts
file:
npx tsx index.ts\\n\\n
In a moment you should see the streamed content in your CLI. Congratulations! You have just built your first AI agent with ease.
\\nBaseAI gives you the ability to build and test AI agents locally but to use it in production, you need to deploy it to Langbase. Here’s how to do so.
\\nFirst, you need to authenticate with your Langbase account:
\\nnpx baseai@latest auth\\n\\n
Once you have successfully authenticated, deploying your pipe is a matter of running the following command:
\\nnpx baseai@latest deploy\\n\\n
Once deployed, you can access your pipe and explore all its settings and features in the Langbase AI Studio. This gives you much more power to explore and experiment with your AI agent in a user-friendly environment.
\\nThe FAQ generator is great for general questions but what if customers want to ask specific questions about your products or services? Then you can create a pipe with memory implementing the RAG technology.
\\nRAG, or Retrieval Augmented Generation, allows you to chat with your data. Imagine that I have read a book and then you ask me questions related to the book. I would use my memories of the book’s content to answer your questions.
\\n\\nSimilarly, when you ask a RAG AI agent a question, it uses its embedded memory to retrieve the necessary information about the answer. This reduces AI hallucinations and provides more accurate and relevant responses.
\\nIn our project, we’re going to create a pipe with memory where we’ll embed a set of documents to be used as a knowledge base.
\\nTo create a new memory, run the following:
\\nnpx baseai@latest memory\\n\\n
The CLI will ask you for the memory name and description. You can call it “knowledge-base” and use whatever description you want. Leave the answer for “Do you want to create memory from current project git repository?” as “no”.
\\nThis will create a baseai/memory/knowledge-base
directory with an index.ts
file inside:
import {MemoryI} from \'@baseai/core\';\\nconst memoryKnowledgeBase = (): MemoryI => ({\\n name: \'knowledge-base\',\\n description: \\"My knowledge base\\",\\n git: {\\n enabled: false,\\n include: [\'documents/**/*\'],\\n gitignore: false,\\n deployedAt: \'\',\\n embeddedAt: \'\'\\n }\\n});\\nexport default memoryKnowledgeBase;\\n\\n
The next step is to add your data. Open this tutorial of mine, copy all the text, and put it in a tailwind-libraries.txt
file. Next, add the file in baseai/memory/knowledge-base/documents
.
N.B., Langbase currently supports .txt
, .pdf
, .md
, .csv
, and all major plain code files. A single file can be at most 10 MB.
Now we need to embed the memory to generate embeddings for the documents. To create memory embeddings, run the following:
\\nnpx baseai@latest embed -m knowledge-base\\n\\n
Make sure to add OPENAI_API_KEY
to the .env
file at the root of your project. This is required to generate embeddings for the documents in the memory. BaseAI will generate embeddings for the documents and create a semantic index for search.
Now let’s create a new pipe and add the memory we’ve just created to it:
\\nnpx baseai@latest pipe\\n\\n
Set the pipe name to “knowledge-base-rag”. BaseAI automatically detects when you have memory so it will ask you which one you want to use in your pipe. Select knowledge-base
, and use this for the system prompt:
You are a helpful AI assistant. You provide the best, concise, and correct answers to the user's questions.
Here is the generated pipe:
\\nimport { PipeI } from \'@baseai/core\';\\nimport knowledgeBaseMemory from \'../memory/knowledge-base\';\\nconst pipeKnowledgeBaseRag = (): PipeI => ({\\n // Replace with your API key https://langbase.com/docs/api-reference/api-keys\\n apiKey: process.env.LANGBASE_API_KEY!,\\n name: \'knowledge-base-rag\',\\n description: \'A knowledge base with RAG functionality\',\\n status: \'private\',\\n model: \'openai:gpt-4o-mini\',\\n stream: true,\\n json: false,\\n store: true,\\n moderate: true,\\n top_p: 1,\\n max_tokens: 1000,\\n temperature: 0.7,\\n presence_penalty: 1,\\n frequency_penalty: 1,\\n stop: [],\\n tool_choice: \'auto\',\\n parallel_tool_calls: true,\\n messages: [\\n {\\n role: \'system\',\\n content: `You are a helpful AI assistant.You provide the best, concise, and correct answers to the user\'s questions.`\\n },\\n {\\n role: \'system\',\\n name: \'rag\',\\n content:\\n \\"Below is some CONTEXT for you to answer the questions. ONLY answer from the CONTEXT. CONTEXT consists of multiple information chunks. Each chunk has a source mentioned at the end.\\\\n\\\\nFor each piece of response you provide, cite the source in brackets like so: [1].\\\\n\\\\nAt the end of the answer, always list each source with its corresponding number and provide the document name. like so [1] Filename.doc.\\\\n\\\\nIf you don\'t know the answer, just say that you don\'t know. Ask for more context and better questions if needed.\\"\\n }\\n ],\\n variables: [],\\n memory: [knowledgeBaseMemory()],\\n tools: []\\n});\\nexport default pipeKnowledgeBaseRag;\\n\\n
BaseAI automatically adds a RAG system prompt that is suitable for most use cases but you can customize it to your needs. It helps the AI model understand the context of the conversation and generate responses that are relevant, accurate, and grammatically correct. Now, let’s test it. Create an index-rag.ts
file in the root and add the following content:
import \'dotenv/config\';\\nimport {Pipe, getRunner} from \'@baseai/core\';\\nimport pipeKnowledgeBaseRag from \'./baseai/pipes/knowledge-base-rag\';\\n\\nconst pipe = new Pipe(pipeKnowledgeBaseRag());\\n\\nasync function main() {\\n const {stream} = await pipe.run({\\n messages: [{role: \'user\', content: \'Which Tailwind CSS component library provides the most components?\'}],\\n stream: true\\n });\\n\\n const runner = getRunner(stream);\\n runner.on(\'connect\', () => {\\n console.log(\'Stream started.\\\\n\');\\n });\\n runner.on(\'content\', content => {\\n process.stdout.write(content);\\n });\\n runner.on(\'end\', () => {\\n console.log(\'\\\\nStream ended.\');\\n });\\n runner.on(\'error\', error => {\\n console.error(\'Error:\', error);\\n });\\n}\\nmain();\\n\\n
Now, to run the pipe, make sure the dev server is running. Then run the index-rag.ts
file:
npx tsx index-rag.ts\\n\\n
After a moment, you should see something similar in your terminal:
\\n**Tailwind Elements** provides the most components, with a huge set of more than 500 UI components. These components range from very simple elements like headings and icons to more complex ones like charts and complete forms, making it suitable for almost any kind of project [1].\\n\\nSources:\\n[1] tailwind-libraries.txt\\n\\n
Here, the AI agent uses the provided data to answer the question.
\\nIn this section, we’ll explore a simple example of how you can use AI agents in a Next.js frontend app.
\\nStart by running the following:
\\nnpx create-next-app@latest\\n\\n
Accept all default settings. When the app is set up, create an actions.ts
file in the app
directory with the following content:
\'use server\';\\n\\nexport async function generateCompletion(count: string, topic: string, keywords: string, tone: string) {\\n const url = \'https://api.langbase.com/v1/pipes/run\';\\n const apiKey = \'PIPE-API-KEY\';\\n\\n const data = {\\n messages: [],\\n variables: [\\n { name: \'count\', value: count }, \\n { name: \'topic\', value: topic }, \\n { name: \'keywords\', value: keywords }, \\n { name: \'tone\', value: tone }]\\n };\\n\\n const response = await fetch(url, {\\n method: \'POST\',\\n headers: {\\n \'Content-Type\': \'application/json\',\\n Authorization: `Bearer ${apiKey}`\\n },\\n body: JSON.stringify(data)\\n });\\n\\n const resText = await response.json();\\n return resText;\\n}\\n\\n
Here, we have a server action that runs the pipe and returns the AI completion. You need to replace the PIPE-API-KEY placeholder with your Pipe API key. To get it, open your pipe in Langbase, click on the API tab next to the selected Pipe tab, and copy the API key from there.
Now, open page.tsx
and replace its contents with the following:
'use client';

import { useState } from 'react';
import { generateCompletion } from './actions';

export default function Home() {
  const [count, setCount] = useState('');
  const [topic, setTopic] = useState('');
  const [keywords, setKeywords] = useState('');
  const [tone, setTone] = useState('');
  const [completion, setCompletion] = useState('');
  const [loading, setLoading] = useState(false);

  // The Pipe API key stays on the server, inside the generateCompletion action
  const handleGenerateCompletion = async () => {
    setLoading(true);
    const { completion } = await generateCompletion(count, topic, keywords, tone);
    setCompletion(completion);
    setLoading(false);
  };

  return (
    <main className="flex min-h-screen flex-col items-center justify-between p-24">
      <div className="flex flex-col items-center">
        <h1 className="text-4xl font-bold">Generate FAQs</h1>
        <p className="mt-4 text-lg">
          Enter a topic and click the button to generate FAQs using LLM
        </p>
        <input type="text" placeholder="Enter a topic"
          className="w-1/2 m-3 rounded-lg border border-slate-300 bg-slate-200 p-3 text-sm text-slate-800 shadow-md focus:border-blue-600 focus:outline-none focus:ring-1 focus:ring-blue-600 dark:border-slate-200/10 dark:bg-slate-800 dark:text-slate-200 dark:placeholder-slate-400 dark:focus:border-blue-600 sm:text-base"
          value={topic} onChange={e => setTopic(e.target.value)}
        />
        <input type="text" placeholder="Enter keywords"
          className="w-1/2 m-3 rounded-lg border border-slate-300 bg-slate-200 p-3 text-sm text-slate-800 shadow-md focus:border-blue-600 focus:outline-none focus:ring-1 focus:ring-blue-600 dark:border-slate-200/10 dark:bg-slate-800 dark:text-slate-200 dark:placeholder-slate-400 dark:focus:border-blue-600 sm:text-base"
          value={keywords} onChange={e => setKeywords(e.target.value)}
        />
        <input type="text" placeholder="Enter a tone"
          className="w-1/2 m-3 rounded-lg border border-slate-300 bg-slate-200 p-3 text-sm text-slate-800 shadow-md focus:border-blue-600 focus:outline-none focus:ring-1 focus:ring-blue-600 dark:border-slate-200/10 dark:bg-slate-800 dark:text-slate-200 dark:placeholder-slate-400 dark:focus:border-blue-600 sm:text-base"
          value={tone} onChange={e => setTone(e.target.value)}
        />
        <input type="text" placeholder="Enter a count"
          className="w-1/2 m-3 rounded-lg border border-slate-300 bg-slate-200 p-3 text-sm text-slate-800 shadow-md focus:border-blue-600 focus:outline-none focus:ring-1 focus:ring-blue-600 dark:border-slate-200/10 dark:bg-slate-800 dark:text-slate-200 dark:placeholder-slate-400 dark:focus:border-blue-600 sm:text-base"
          value={count} onChange={e => setCount(e.target.value)}
        />
        <button onClick={handleGenerateCompletion}
          className="inline-flex items-center gap-x-2 m-3 rounded-lg bg-blue-600 px-4 py-2.5 text-center text-base font-medium text-slate-50 hover:bg-blue-800 focus:ring-4 focus:ring-blue-200 dark:focus:ring-blue-900">
          Generate FAQs
          <svg xmlns="http://www.w3.org/2000/svg" className="h-4 w-4" viewBox="0 0 24 24" strokeWidth="2"
            stroke="currentColor" fill="none" strokeLinecap="round" strokeLinejoin="round">
            <path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
            <path d="M10 14l11 -11"></path>
            <path d="M21 3l-6.5 18a.55 .55 0 0 1 -1 0l-3.5 -7l-7 -3.5a.55 .55 0 0 1 0 -1l18 -6.5"></path>
          </svg>
        </button>
        {loading && <p className="mt-4">Loading...</p>}
        {completion && (
          <textarea readOnly value={completion} cols={100} rows={20}
            className="w-full bg-slate-50 p-10 text-base text-slate-900 focus:outline-none dark:bg-slate-800 dark:text-slate-200 dark:placeholder-slate-400" />
        )}
      </div>
    </main>
  );
}
Here we created the necessary inputs for the pipe’s variables and added a textarea for the AI-generated response.
\\nBefore running the app, go to the pipe in Langbase and, in the right sidebar in the Meta panel, turn the Stream mode to Off
.
Now run the app and test it:
\\nnpm run dev\\n\\n
Here is what it should look like:
\\nHere is a prompt example:
\\nAnd here is the AI completion response:
\\nIn this tutorial, we explored the benefits of building your own AI agents. We did so by building a simple but powerful webpage FAQ generator. We also learned how to add memory to an AI agent to take advantage of RAG technology. Finally, we integrated the FAQ generator AI agent into a Next.js app.
\\nThe future belongs to AI and gaining expertise in this area will offer you a big advantage.
\\nTo learn more about building AI agents, don’t forget to check out the BaseAI and Langbase documentation.
UI development will always be a pivotal part of web development. But like any other key software element, it keeps evolving — and UI developers must evolve with it.
\\nThe ever-increasing need for online presence has skyrocketed the demand for faster UI development, but the good news is that we now have more tools to help us build UIs within seconds. One such tool is Framer AI, a unique UI dev tool that specializes in instantly developing UIs with AI prompts.
\\nThis article will explore creating UI components with Shadcn and Framer AI. To follow along with this quick tutorial, you must have the following:
\\nShadcn is an open-source UI library that consists of a collection of different UI components built on top of Radix UI primitives and styled with Tailwind CSS. Radix UI is a low-level UI library that focuses on accessibility, which counteracts the questions about Shadcn’s accessibility. Shadcn doesn’t require you to install npm packages, unlike your typical UI component library.
Simply refer to the official Shadcn documentation, then copy and paste the components as they are into your project. You can also make use of the CLI they have made available. Users can work with Shadcn on any framework that supports React components, such as Next.js, Gatsby, etc.
\\nYou have two options to get started with Shadcn. You can either use the CLI tool to automate setup, or manually copy components into your project. Both options give you access to high-quality, customizable components built with Radix UI Primitives and styled with Tailwind CSS.
\\nVisit the official Shadcn documentation and browse through the collection of pre-built components.
\\nOnce you find a component you want, simply copy the provided code and paste it into your project.
\\nFor example, here’s how you can use the Button component:
\\nimport { Button } from \\"@/components/ui/button\\";\\n <Button variant=\\"outline\\">Click Me</Button>;\\n\\n
Shadcn components are built to be flexible. You can use Tailwind CSS to style them and Radix UI primitives to ensure accessibility.
\\nWant a button that looks like a link? Use the buttonVariants
helper:
import { buttonVariants } from \\"@/components/ui/button\\";\\n <Link className={buttonVariants({ variant: \\"outline\\" })}>Click Here</Link>;\\n\\n
Here’s what setting up Shadcn looks like in practice:
\\nFor an even faster setup, Shadcn provides a CLI tool that allows you to create a root project and add components easily. Here’s how you can use it:
\\nRun the following command to initialize a Shadcn project:
\\nnpx shadcn@latest init\\n\\n
This also gives you direct access to all of Shadcn’s customizable components.
\\nAdd specific components to your project using a simple command. For example, to add a button you can use:
\\nnpx shadcn@latest add button \\n\\n
This simple command will automate the process of setting up a reusable, accessible, and customizable button component for your project.
\\nUse Tailwind to style and customize the components as you wish. Radix UI primitives take care of accessibility by default:
\\nFramer AI is a new AI tool that’s generating interest among front-end developers. It’s gaining traction for its ability to generate visually appealing and responsive web pages within seconds. All you have to do is input well-articulated prompts, and the AI tool does the rest.
\\nTo start using Framer AI, you’ll need to download and install the Framer desktop app. Head over to the official Framer download page and download and install the app for your operating system. Framer AI was designed to be intuitive and to simplify the process of creating web pages.
\\nHere’s how to get started:
\\nOpen a project in Framer or create a new one. Go to the Options dropdown at the top-left of the Framer page, then click on File > Generate Page to activate the AI assistant:
\\nEnter a clear prompt that accurately describes the exact type of web page you intend to create.
\\nFor instance, you might say: “Design a product launch page for a team collaboration app called Focus. The page should include a blue and white theme color. I want you to use wavy animations throughout the webpage, and ensure it is responsive on all device sizes from 132px to 1440px.”
\\nHit Start, and within moments, Framer AI generates a fully responsive page that represents what you described:
\\nOnce you’ve created the webpage, you can continue working on it further. The Framer AI environment contains features to help you adjust colors, change typography, and redesign the entire layout as you wish.
\\nFramer AI is unique for its speed and flexibility. Some key features of Framer AI include:
\\n1. AI-powered design
\\nFramer AI uses advanced algorithms to create page layouts based on the prompts you input. It usually releases polished and visually appealing results, which makes Framer AI a great starting point for any project:
2. Responsive by default
Every page that Framer AI generates is fully responsive, which ensures that layouts look good on all devices without the additional effort of writing media queries.
3. Customizable outputs
\\nYou’re not locked into the AI’s initial output. You can tweak colors, fonts, and layouts to align with your brand and vision.
4. Multiple suggestions
\\nFramer AI helps with your prompt by suggesting several variations that give you creative options to explore.
5. Integrated workflow
\\nFramer AI integrates seamlessly into Framer’s broader design environment, so it allows you to leverage its features along with other design tools.
6. Time-saving benefits
\\nFramer AI saves the time spent on the initial design phase while prototyping or building a production-ready webpage.
Now let's build a button component from scratch using Shadcn and Framer AI. Creating a sleek and functional UI component doesn't have to take hours; in this walkthrough, we'll create a fully functional, styled button component in under 60 seconds.
\\nFramer AI generates the base structure for your components, which simplifies the initial coding process for your project.
\\n\\nOpen Framer AI and provide a descriptive prompt: “Generate a primary button styled with Tailwind CSS, orange in color, with rounded borders.”
\\nCopy the React code provided by Framer AI:
\\nimport React from \'react\';\\n const PrimaryButton = ({ text, onClick }) => {\\n return (\\n <button\\n onClick={onClick}\\n className=\\"bg-blue-500 hover:bg-blue-600 text-white font-bold py-2 px-4 rounded focus:outline-none focus:ring-2 focus:ring-blue-400\\"\\n >\\n {text}\\n </button>\\n );\\n };\\n export default PrimaryButton;\\n\\n
Shadcn offers a structured and reusable way to manage components in your project.
\\nAdd the button component to your project using Shadcn’s CLI:
npx shadcn@latest add button
Replace the default Shadcn button code with the Framer AI-generated code.
\\nThe next step is to make the button uniquely yours. You can add custom Tailwind CSS styling that fits your project design system.
\\nFor example:
\\nclassName=\\"bg-orange-500 hover:bg-orange-600 text-white font-bold py-2 px-4 rounded\\n\\n
Finally, use your new button in your Shadcn project. Import it and place it wherever it fits within your UI like this:
import React from 'react';
import PrimaryButton from './components/Button';

function App() {
  const handleClick = () => {
    alert("Button clicked!");
  };
  return (
    <div className="App">
      <PrimaryButton text="Click Me" onClick={handleClick} />
    </div>
  );
}
export default App;
When building UI components, tools such as Shadcn and Framer AI redefine the process, focusing on speed, flexibility, and accessibility. But how does Shadcn stack up against traditional libraries like Bootstrap and Material UI?
\\nLet’s dive into a side-by-side comparison:
\\nFeature | \\nShadcn | \\nBootstrap | \\nMaterial UI | \\n
---|---|---|---|
Speed | \\nYou can create component projects in seconds with the CLI tool shadcn-cli | \\nRequires some setup but offers pre-designed components | \\nComes with ready-made components but involves customization | \\n
Customization | \\nOffers full flexibility as it allows users to customize the web pages with utility classes | \\nLimited unless you heavily override the component settings | \\nHas a predefined style and structure that limits full creative freedom | \\n
Accessibility | \\nBuilt on Radix Primitives for accessibility by default | \\nAccessibility needs manual effort in some cases | \\nAccessibility is well-supported but can require tweaks | \\n
Learning Curve | \\nEasy to understand and execute into your project | \\nEasy to start but harder to master advanced customizations | \\nRequires familiarity with Google’s Material Design | \\n
Frameworks to work with | \\nWorks with React and frameworks like Next.js | \\nCan be used with most frontend tools | \\nBest for React apps; needs adapters for others | \\n
Pricing | \\nFree for basic use; \\nPro plan starts at $19/month: Unlimited posts, users, analytics, premium support | \\nFree for basic use; \\nPro starts at $15/month, Enterprise at $29/month | \\nFree for basic use; \\nPro starts at $15/month/dev, Premium at $49/month/dev | \\n
Framer AI uses natural language prompts to generate UI components, making it a unique tool in the design-to-code space. How does it compare to other AI tools?
\\nFeature | \\nFramer AI | \\nUizard | \\nPerplexity AI | \\n
---|---|---|---|
Ease of Use | \\nYou can quickly generate layouts with simple prompts | \\nFocuses on transforming sketches into prototypes | \\nSpeeds up research and design by generating clear, concise, and actionable UI ideas with AI | \\n
Customization | \\nLets you refine generated designs using Tailwind CSS | \\nLimited customization options after generation | \\nOffers intelligent suggestions but limits your ability to edit or customize output | \\n
Responsiveness | \\nGenerates responsive layouts by default | \\nNot all generated layouts are responsive | \\nIts interface prioritizes simplicity over advanced responsiveness, which may not be as flexible for you to create intricate designs | \\n
Pricing | \\nFree to use: \\nMini: $5/month (2 pages, 10 GB bandwidth). \\nBasic: $15/month (1000 pages, 50 GB bandwidth). \\nPro: $30/month (10,000 pages, 100 GB bandwidth) | \\nFree Plan: $0 forever (ideal for students/hobbyists). \\nPro Plan: $12/month (billed annually) | \\nFree to use: \\nPro: $5/month \\n(300 Pro searches/day, Advanced AI models (GPT-4 Omni, Claude 3), File analysis, $5 API credit, Pro Discord & support) | \\n
Shadcn is perfect for developers who seek accessible components that they can scale and customize easily. Its integration with Tailwind and Radix makes it a strong choice for modern React projects.
\\nFramer AI helps you quickly generate UI components and code designs. It’s ideal for teams that want to build quick, customizable UIs that are professional and can serve a purpose.
\\nIn this article, we went over the importance of speed in developing UIs in the modern era. We highlighted Framer AI and Shadcn as tools that are built to help front–end developers create UIs faster. We then explored creating a button UI with these tools and compared them with similar ones.
\\nWith the right tools in hand, creating sleek, functional UIs has never been easier or more accessible.
\\n Editor’s note: This React Hooks tutorial was last updated by Nelson Michael in January 2025 to add hooks that have been released since the original publication date, including useImperativeHandle
, useFormStatus
, and useOptimistic
.
React Hooks revolutionized the way we write React components by introducing a simple yet powerful API for managing state and side effects in functional components. However, with a massive community and countless use cases, it’s easy to get overwhelmed when trying to adopt best practices or troubleshoot common challenges.
\\nIn this tutorial, we’ll outline some best practices when working with React Hooks and highlight some use cases with examples, from simple to advanced scenarios. To help demonstrate how to solve common React Hooks questions, I built an accompanying web app for live interaction with some of the examples from this article.
\\nThis React Hooks cheat sheet includes a lot of code snippets and assumes some Hooks fluency. If you’re completely new to Hooks, you may want to start with our React Hooks API reference guide.
\\nuseState
useState
lets you use local state within a function component. You pass the initial state to this function and it returns a variable with the current state value (not necessarily the initial state) and another function to update this value.
Check out this React useState
video tutorial:
Declaring a state variable is as simple as calling useState
with some initial state value, like so: useState(initialStateValue)
:
const DeclareStateVar = () => {\\n const [count] = useState(100)\\n return <div> State variable is {count}</div>\\n}\\n\\n
Updating a state variable is as simple as invoking the updater function returned by the useState
invocation: const [stateValue, updaterFn] = useState(initialStateValue);
:
Note how the age state variable is being updated.
\\nHere’s the code responsible for the screencast above:
\\nconst UpdateStateVar = () => {\\n const [age, setAge] = useState(19)\\n const handleClick = () => setAge(age + 1)\\n\\n return (\\n <div>\\n Today I am {age} Years of Age\\n <div>\\n <button onClick={handleClick}>Get older! </button>\\n </div>\\n </div>\\n )\\n}\\n\\n
Why does the useState Hook not update immediately?
If you find that useState
/setState
are not updating immediately, the answer is simple: they’re just queues.
React useState
and setState
don’t make changes directly to the state object; they create queues to optimize performance, which is why the changes don’t update immediately.
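To make this queueing behavior concrete, here's a minimal sketch (component and handler names are illustrative): calling the updater twice with a stale value advances the state by one, while functional updates advance it by two.

const QueuedUpdates = () => {
  const [count, setCount] = useState(0)

  const addTwoStale = () => {
    // Both calls read the same `count` from this render, so state only advances by 1
    setCount(count + 1)
    setCount(count + 1)
  }

  const addTwoQueued = () => {
    // Functional updates are applied in order against the queued value, advancing by 2
    setCount(prev => prev + 1)
    setCount(prev => prev + 1)
  }

  return (
    <>
      <p>Count: {count}</p>
      <button onClick={addTwoStale}>+2 (stale)</button>
      <button onClick={addTwoQueued}>+2 (queued)</button>
    </>
  )
}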
Multiple state variables may be used and updated from within a functional component, as shown below:
\\nHere’s the code responsible for the screencast above:
\\nconst MultipleStateVars = () => {\\n const [age, setAge] = useState(19)\\n const [siblingsNum, setSiblingsNum] = \\n useState(10)\\n\\n const handleAge = () => setAge(age + 1)\\n const handleSiblingsNum = () => \\n setSiblingsNum(siblingsNum + 1)\\n\\n\\n return (\\n <div>\\n <p>Today I am {age} Years of Age</p>\\n <p>I have {siblingsNum} siblings</p>\\n\\n <div>\\n <button onClick={handleAge}>\\n Get older! \\n </button>\\n <button onClick={handleSiblingsNum}>\\n More siblings! \\n </button>\\n </div>\\n </div>\\n )\\n}\\n\\n
As opposed to strings and numbers, you could also use an object as the initial value passed to useState
.
Note that you have to pass the entire object to the useState
updater function because the object is replaced, not merged:
// 🐢 setState (object merge) vs useState (object replace)\\n// assume initial state is {name: \\"Ohans\\"}\\n\\nsetState({ age: \'unknown\' })\\n// new state object will be\\n// {name: \\"Ohans\\", age: \\"unknown\\"}\\n\\nuseStateUpdater({ age: \'unknown\' })\\n// new state object will be\\n// {age: \\"unknown\\"} - initial object is replaced\\n\\n
Multiple state objects are updated via a state object variable.
\\nHere’s the code for the screencast above:
\\nconst StateObject = () => {\\n const [state, setState] = useState({ age: 19, siblingsNum: 4 })\\n const handleClick = val =>\\n setState({\\n ...state,\\n [val]: state[val] + 1\\n })\\n const { age, siblingsNum } = state\\n\\n return (\\n <div>\\n <p>Today I am {age} Years of Age</p>\\n <p>I have {siblingsNum} siblings</p>\\n\\n <div>\\n <button onClick={handleClick.bind(null, \'age\')}>Get older!</button>\\n <button onClick={handleClick.bind(null, \'siblingsNum\')}>\\n More siblings!\\n </button>\\n </div>\\n </div>\\n )\\n}\\n\\n
As opposed to just passing an initial state value, state could also be initialized from a function, as shown below:
\\nconst StateFromFn = () => {\\n const [token] = useState(() => {\\n let token = window.localStorage.getItem(\\"my-token\\");\\n return token || \\"default#-token#\\"\\n })\\n\\n return <div>Token is {token}</div>\\n}\\n\\n
setState
The updater function returned from invoking useState
can also take a function similar to the good ol’ setState
:
const [value, updateValue] = useState(0)\\n// both forms of invoking \\"updateValue\\" below are valid 👇\\nupdateValue(1);\\nupdateValue(previousValue => previousValue + 1);\\n\\n
This is ideal when the state update depends on some previous value of state:
\\nA counter with functional setState
updates.
Here’s the code for the screencast above:
\\nconst CounterFnSetState = () => {\\n const [count, setCount] = useState(0);\\n return (\\n <>\\n <p>Count value is: {count}</p>\\n <button onClick={() => setCount(0)}>Reset</button>\\n <button \\n onClick={() => setCount(prevCount => prevCount + 1)}>\\n Plus (+)\\n </button>\\n <button \\n onClick={() => setCount(prevCount => prevCount - 1)}>\\n Minus (-)\\n </button>\\n </>\\n );\\n}\\n\\n
Here’s a live, editable useState
cheat sheet if you want to dive deeper on your own.
useEffect
With useEffect
, you invoke side effects from within functional components, which is an important concept to understand in the React Hooks era.
useEffect
for basic side effectsWatch the title of the document update.
\\nHere’s the code responsible for the screencast above:
\\nconst BasicEffect = () => {\\n const [age, setAge] = useState(0)\\n const handleClick = () => setAge(age + 1)\\n\\n useEffect(() => {\\n document.title = \'You are \' + age + \' years old!\'\\n })\\n\\n return <div>\\n <p> Look at the title of the current tab in your browser </p>\\n <button onClick={handleClick}>Update Title!! </button>\\n </div>\\n}\\n\\n
Cleaning up the useEffect Hook
It’s pretty common to clean up an effect after some time. This is possible by returning a function from within the effect function passed to useEffect. Below is an example with addEventListener:
const EffectCleanup = () => {\\n useEffect(() => {\\n const clicked = () => console.log(\'window clicked\')\\n window.addEventListener(\'click\', clicked)\\n\\n // return a clean-up function\\n return () => {\\n window.removeEventListener(\'click\', clicked)\\n }\\n }, [])\\n\\n return <div>\\n When you click the window you\'ll \\n find a message logged to the console\\n </div>\\n}\\n\\n
Multiple useEffect Hooks in a single component
Multiple useEffect calls can happen within a functional component, as shown below:
const MultipleEffects = () => {\\n // 🍟\\n useEffect(() => {\\n const clicked = () => console.log(\'window clicked\')\\n window.addEventListener(\'click\', clicked)\\n\\n return () => {\\n window.removeEventListener(\'click\', clicked)\\n }\\n }, [])\\n\\n // 🍟 another useEffect hook \\n useEffect(() => {\\n console.log(\\"another useEffect call\\");\\n })\\n\\n return <div>\\n Check your console logs\\n </div>\\n}\\n\\n
Note that useEffect calls can be skipped, i.e., not invoked on every render. This is done by passing a second array argument to useEffect.
const ArrayDep = () => {\\n const [randomNumber, setRandomNumber] = useState(0)\\n const [effectLogs, setEffectLogs] = useState([])\\n\\n useEffect(\\n () => {\\n setEffectLogs(prevEffectLogs => [...prevEffectLogs, \'effect fn has been invoked\'])\\n },\\n [randomNumber]\\n )\\n\\n return (\\n <div>\\n <h1>{randomNumber}</h1>\\n <button\\n onClick={() => {\\n setRandomNumber(Math.random())\\n }}\\n >\\n Generate random number!\\n </button>\\n <div>\\n {effectLogs.map((effect, index) => (\\n <div key={index}>{\'🍔\'.repeat(index) + effect}</div>\\n ))}\\n </div>\\n </div>\\n )\\n}\\n\\n
In the example above, useEffect
is passed an array of one value: [randomNumber]
.
Thus, the effect function will be called on mount and whenever a new random number is generated.
\\nHere’s the Generate random number button being clicked and the effect function being rerun upon generating a new random number:
\\nIn this example, useEffect
is passed an empty array, []
. Therefore, the effect function will be called only on mount:
const ArrayDepMount = () => {\\n const [randomNumber, setRandomNumber] = useState(0)\\n const [effectLogs, setEffectLogs] = useState([])\\n\\n useEffect(\\n () => {\\n setEffectLogs(prevEffectLogs => [...prevEffectLogs, \'effect fn has been invoked\'])\\n },\\n []\\n )\\n\\n return (\\n <div>\\n <h1>{randomNumber}</h1>\\n <button\\n onClick={() => {\\n setRandomNumber(Math.random())\\n }}\\n >\\n Generate random number!\\n </button>\\n <div>\\n {effectLogs.map((effect, index) => (\\n <div key={index}>{\'🍔\'.repeat(index) + effect}</div>\\n ))}\\n </div>\\n </div>\\n )\\n}\\n\\n
Here’s the button being clicked and the effect function not being invoked:
\\nWithout an array dependency, the effect function will be run after every single render.
\\n\\nuseEffect(() => {\\n  console.log(\'This will be logged after every render!\')\\n})\\n\\n
Here’s a live, editable useEffect
cheat sheet if you’d like to explore further.
useContext
useContext saves you the stress of having to rely on a Context consumer: it has a simpler API compared to MyContext.Consumer and the render props API that the consumer exposes.
Context is React’s way of handling shared data between multiple components.
\\nThe following example highlights the difference between consuming a context object value via useContext
or Context.Consumer
:
// example Context object\\nconst ThemeContext = React.createContext(\\"dark\\");\\n\\n// usage with context Consumer\\nfunction Button() {\\n return <ThemeContext.Consumer>\\n {theme => <button className={theme}> Amazing button </button>}\\n </ThemeContext.Consumer>\\n}\\n\\n\\n// usage with useContext hook \\nimport {useContext} from \'react\';\\n\\nfunction ButtonHooks() {\\n const theme = useContext(ThemeContext)\\n return <button className={theme}>Amazing button</button>\\n}\\n\\n
Here’s a live example with useContext
:
And here’s the code responsible for the example above:
\\nconst ThemeContext = React.createContext(\'light\');\\n\\nconst Display = () => {\\n const theme = useContext(ThemeContext);\\n return <div\\n style={{\\n background: theme === \'dark\' ? \'black\' : \'papayawhip\',\\n color: theme === \'dark\' ? \'white\' : \'palevioletred\',\\n width: \'100%\',\\n minHeight: \'200px\'\\n }}\\n >\\n {\'The theme here is \' + theme}\\n </div>\\n}\\n\\n
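Note that the demo above relies on the default value ('light') passed to createContext. To actually supply a theme, a Provider must sit higher in the tree; here is a minimal sketch of that wiring (this App wrapper is our own addition, not part of the demo):

```jsx
const App = () => (
  <ThemeContext.Provider value='dark'>
    <Display />
  </ThemeContext.Provider>
)
```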
Here’s a live, editable React Context cheat sheet if you’d like to tinker around yourself.
\\nuseLayoutEffect
useLayoutEffect
has the very same signature as useEffect
. We’ll discuss the difference between useLayoutEffect
and useEffect
below:
useLayoutEffect(() => {\\n  // do something\\n}, [arrayDependency])\\n\\n
useLayoutEffect
vs. useEffect
Here’s the same example for useEffect
built with useLayoutEffect
:
And here’s the code:
\\nconst ArrayDep = () => {\\n const [randomNumber, setRandomNumber] = useState(0)\\n const [effectLogs, setEffectLogs] = useState([])\\n\\n useLayoutEffect(\\n () => {\\n setEffectLogs(prevEffectLogs => [...prevEffectLogs, \'effect fn has been invoked\'])\\n },\\n [randomNumber]\\n )\\n\\n return (\\n <div>\\n <h1>{randomNumber}</h1>\\n <button\\n onClick={() => {\\n setRandomNumber(Math.random())\\n }}\\n >\\n Generate random number!\\n </button>\\n <div>\\n {effectLogs.map((effect, index) => (\\n <div key={index}>{\'🍔\'.repeat(index) + effect}</div>\\n ))}\\n </div>\\n </div>\\n )\\n }\\n\\n
What’s the difference between useEffect
and useLayoutEffect
?
The function passed to useEffect
fires after layout and paint — i.e. after the render has been committed to the screen. This is OK for most side effects that shouldn’t block the browser from updating the screen.
However, there are cases where you may not want the behavior useEffect
provides; for example, if you need to make a visual change to the DOM as a side effect, useEffect
isn’t the best choice.
To prevent the user from seeing flickers of changes, you can use useLayoutEffect
. The function passed to useLayoutEffect
will be run before the browser updates the screen.
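To make that concrete, here is a minimal sketch of the classic measure-then-position case (the Tooltip component is our own illustration): with useEffect, the element could briefly paint in the wrong spot, while useLayoutEffect measures and shifts it before the user sees anything.

```jsx
const Tooltip = () => {
  const ref = useRef(null)
  const [height, setHeight] = useState(0)

  // Runs after render but before paint, so the shift below
  // is applied before the user ever sees the element
  useLayoutEffect(() => {
    setHeight(ref.current.getBoundingClientRect().height)
  }, [])

  return (
    <div ref={ref} style={{ transform: `translateY(-${height}px)` }}>
      Tooltip content
    </div>
  )
}
```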
Here’s a live, editable useLayoutEffect
cheat sheet.
useReducer
useReducer
may be used as an alternative to useState
. It’s ideal for complex state logic where there’s a dependency on previous state values or a lot of state sub-values.
Because a reducer is a pure function of state and action, you may also find useReducer easier to test.
useReducer basic usage
As opposed to calling useState
, call useReducer
with a reducer
and initialState
, as shown below. The useReducer
call returns the state property and a dispatch
function:
Increase/decrease bar size by managing state with useReducer
.
Here’s the code responsible for the above screencast:
\\nconst initialState = { width: 15 };\\n\\nconst reducer = (state, action) => {\\n switch (action) {\\n case \'plus\':\\n return { width: state.width + 15 }\\n case \'minus\':\\n return { width: Math.max(state.width - 15, 2) }\\n default:\\n throw new Error(\\"what\'s going on?\\")\\n }\\n}\\n\\nconst Bar = () => {\\n const [state, dispatch] = useReducer(reducer, initialState)\\n return <>\\n <div style={{ background: \'teal\', height: \'30px\', width: state.width }}></div>\\n <div style={{marginTop: \'3rem\'}}>\\n <button onClick={() => dispatch(\'plus\')}>Increase bar size</button>\\n <button onClick={() => dispatch(\'minus\')}>Decrease bar size</button>\\n </div>\\n </>\\n}\\n\\nReactDOM.render(<Bar />, document.getElementById(\'root\'))\\n\\n
useReducer
takes a third function parameter. You may initialize state from this function, and whatever’s returned from this function is returned as the state object. This function will be called with initialState
— the second parameter:
Same increase/decrease bar size, with state initialized lazily.
\\nHere’s the code for the example above:
\\nconst initializeState = () => ({\\n width: 100\\n})\\n\\n// ✅ note how the value returned from the fn above overrides initialState below: \\nconst initialState = { width: 15 }\\nconst reducer = (state, action) => {\\n switch (action) {\\n case \'plus\':\\n return { width: state.width + 15 }\\n case \'minus\':\\n return { width: Math.max(state.width - 15, 2) }\\n default:\\n throw new Error(\\"what\'s going on?\\")\\n }\\n}\\n\\nconst Bar = () => {\\n const [state, dispatch] = useReducer(reducer, initialState, initializeState)\\n return <>\\n <div style={{ background: \'teal\', height: \'30px\', width: state.width }}></div>\\n <div style={{marginTop: \'3rem\'}}>\\n <button onClick={() => dispatch(\'plus\')}>Increase bar size</button>\\n <button onClick={() => dispatch(\'minus\')}>Decrease bar size</button>\\n </div>\\n </>\\n}\\n\\nReactDOM.render(<Bar />, document.getElementById(\'root\'))\\n\\n
Imitating this.setState
useReducer uses a reducer that isn’t as strict as Redux’s. For example, the second parameter passed to the reducer, action
, doesn’t need to have a type
property.
This allows for interesting manipulations, such as renaming the second parameter and doing the following:
\\nconst initialState = { width: 15 }; \\n\\nconst reducer = (state, newState) => ({\\n ...state,\\n width: newState.width\\n})\\n\\nconst Bar = () => {\\n const [state, setState] = useReducer(reducer, initialState)\\n return <>\\n <div style={{ background: \'teal\', height: \'30px\', width: state.width }}></div>\\n <div style={{marginTop: \'3rem\'}}>\\n <button onClick={() => setState({width: 100})}>Increase bar size</button>\\n <button onClick={() => setState({width: 3})}>Decrease bar size</button>\\n </div>\\n </>\\n}\\n\\nReactDOM.render(<Bar />, document.getElementById(\'root\'))\\n\\n
The results remain the same with a setState
-like API imitated.
Here’s an editable useReducer
cheat sheet. And here’s a comprehensive guide to the hook if you’re looking for more information.
useCallback
useCallback returns a memoized callback. Wrapping a component with React.memo() tells React to skip re-rendering it when its props are unchanged, but this does not automatically extend to functions passed as props, because a new function reference is created on every render.
React saves a reference to the function when it is wrapped with useCallback. Pass this stable reference as a prop to child components to avoid needless re-renders.
useCallback example
The following example will form the basis of the explanations and code snippets that follow:
\\nHere’s the code:
\\nconst App = () => {\\n const [age, setAge] = useState(99)\\n const handleClick = () => setAge(age + 1)\\n const someValue = \\"someValue\\"\\n const doSomething = () => {\\n return someValue\\n }\\n\\n return (\\n <div>\\n <Age age={age} handleClick={handleClick}/>\\n <Instructions doSomething={doSomething} />\\n </div>\\n )\\n}\\n\\nconst Age = ({ age, handleClick }) => {\\n return (\\n <div>\\n <div style={{ border: \'2px\', background: \\"papayawhip\\", padding: \\"1rem\\" }}>\\n Today I am {age} Years of Age\\n </div>\\n <pre> - click the button below 👇 </pre>\\n <button onClick={handleClick}>Get older! </button>\\n </div>\\n )\\n}\\n\\nconst Instructions = React.memo((props) => {\\n return (\\n <div style={{ background: \'black\', color: \'yellow\', padding: \\"1rem\\" }}>\\n <p>Follow the instructions above as closely as possible</p>\\n </div>\\n )\\n})\\n\\nReactDOM.render (\\n <App />\\n)\\n\\n
In the example above, the parent component, <App />, is updated (and re-rendered) whenever the Get older button is clicked.
Consequently, the <Instructions />
child component is also re-rendered because the doSomething
prop is passed a new callback with a new reference.
Note that even though the Instructions
child component uses React.memo
to optimize performance, it is still re-rendered.
How can this be fixed to prevent <Instructions />
from re-rendering needlessly?
useCallback with referenced function
const App = () => {\\n const [age, setAge] = useState(99)\\n const handleClick = () => setAge(age + 1)\\n const someValue = \\"someValue\\"\\n const doSomething = useCallback(() => {\\n return someValue\\n }, [someValue])\\n\\n return (\\n <div>\\n <Age age={age} handleClick={handleClick} />\\n <Instructions doSomething={doSomething} />\\n </div>\\n )\\n}\\n\\nconst Age = ({ age, handleClick }) => {\\n return (\\n <div>\\n <div style={{ border: \'2px\', background: \\"papayawhip\\", padding: \\"1rem\\" }}>\\n Today I am {age} Years of Age\\n </div>\\n <pre> - click the button below 👇 </pre>\\n <button onClick={handleClick}>Get older! </button>\\n </div>\\n )\\n}\\n\\nconst Instructions = React.memo((props) => {\\n return (\\n <div style={{ background: \'black\', color: \'yellow\', padding: \\"1rem\\" }}>\\n <p>Follow the instructions above as closely as possible</p>\\n </div>\\n )\\n})\\n\\nReactDOM.render(<App />, document.getElementById(\'root\'))\\n\\n
useCallback with inline function
useCallback also works with an inline function. Here’s the same solution with an inline useCallback call:
const App = () => {\\n const [age, setAge] = useState(99)\\n const handleClick = () => setAge(age + 1)\\n const someValue = \\"someValue\\"\\n\\n return (\\n <div>\\n <Age age={age} handleClick={handleClick} />\\n <Instructions doSomething={useCallback(() => {\\n return someValue\\n }, [someValue])} />\\n </div>\\n )\\n}\\n\\nconst Age = ({ age, handleClick }) => {\\n return (\\n <div>\\n <div style={{ border: \'2px\', background: \\"papayawhip\\", padding: \\"1rem\\" }}>\\n Today I am {age} Years of Age\\n </div>\\n <pre> - click the button below 👇 </pre>\\n <button onClick={handleClick}>Get older! </button>\\n </div>\\n )\\n}\\n\\nconst Instructions = React.memo((props) => {\\n return (\\n <div style={{ background: \'black\', color: \'yellow\', padding: \\"1rem\\" }}>\\n <p>Follow the instructions above as closely as possible</p>\\n </div>\\n )\\n})\\n\\nReactDOM.render(<App />, document.getElementById(\'root\'))\\n\\n
Here’s a live, editable useCallback
cheat sheet.
useMemo
The useMemo
function returns a memoized value. useMemo
is different from useCallback
in that it memoizes return values instead of entire functions. Rather than handing back a reference to the same function, React skips invoking the function and returns the previously computed result until the dependencies change.
This allows you to avoid repeatedly performing potentially costly operations until necessary. Use this method with care: values that change inside the function but are not listed in the dependency array do not affect the behavior of useMemo. If you’re performing timestamp additions, for example, this method does not care that the time changes, only that the dependencies differ.
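For instance, here is a sketch of memoizing a potentially costly computation (the ExpensiveList component and its price field are our own illustration):

```jsx
const ExpensiveList = ({ items }) => {
  // Recomputed only when `items` changes, not on every render
  const total = useMemo(
    () => items.reduce((sum, item) => sum + item.price, 0),
    [items]
  )

  return <p>Total: {total}</p>
}
```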
useMemo example
The following example will form the basis of the explanations and code snippets that follow:
\\nHere’s the code responsible for the screenshot above:
\\nconst App = () => {\\n const [age, setAge] = useState(99)\\n const handleClick = () => setAge(age + 1)\\n const someValue = { value: \\"someValue\\" }\\n const doSomething = () => {\\n return someValue\\n }\\n\\n return (\\n <div>\\n <Age age={age} handleClick={handleClick}/>\\n <Instructions doSomething={doSomething} />\\n </div>\\n )\\n}\\n\\nconst Age = ({ age, handleClick }) => {\\n return (\\n <div>\\n <div style={{ border: \'2px\', background: \\"papayawhip\\", padding: \\"1rem\\" }}>\\n Today I am {age} Years of Age\\n </div>\\n <pre> - click the button below 👇 </pre>\\n <button onClick={handleClick}>Get older! </button>\\n </div>\\n )\\n}\\n\\nconst Instructions = React.memo((props) => {\\n return (\\n <div style={{ background: \'black\', color: \'yellow\', padding: \\"1rem\\" }}>\\n <p>Follow the instructions above as closely as possible</p>\\n </div>\\n )\\n})\\n\\nReactDOM.render (\\n <App />\\n)\\n\\n
The example above is similar to the one for useCallback
. The only difference here is that someValue
is an object, not a string. Owing to this, the Instructions
component still re-renders despite the use of React.memo
.
Why? Objects are compared by reference and the reference to someValue
changes whenever <App />
re-renders.
Any solutions?
\\nThe object someValue
may be memoized using useMemo
. This prevents unnecessary re-rendering:
const App = () => {\\n const [age, setAge] = useState(99)\\n const handleClick = () => setAge(age + 1)\\n // memoize the object so its reference stays stable across renders\\n const someValue = useMemo(() => ({ value: \\"someValue\\" }), [])\\n // doSomething is passed as a prop, so it also needs a stable reference\\n const doSomething = useCallback(() => {\\n return someValue\\n }, [someValue])\\n\\n return (\\n <div>\\n <Age age={age} handleClick={handleClick}/>\\n <Instructions doSomething={doSomething} />\\n </div>\\n )\\n}\\n\\nconst Age = ({ age, handleClick }) => {\\n return (\\n <div>\\n <div style={{ border: \'2px\', background: \\"papayawhip\\", padding: \\"1rem\\" }}>\\n Today I am {age} Years of Age\\n </div>\\n <pre> - click the button below 👇 </pre>\\n <button onClick={handleClick}>Get older! </button>\\n </div>\\n )\\n}\\n\\nconst Instructions = React.memo((props) => {\\n return (\\n <div style={{ background: \'black\', color: \'yellow\', padding: \\"1rem\\" }}>\\n <p>Follow the instructions above as closely as possible</p>\\n </div>\\n )\\n})\\n\\nReactDOM.render(<App />, document.getElementById(\'root\'))\\n\\n
Here’s a live, editable useMemo
demo.
useRef
useRef
returns a “ref” object. Values are accessed from the .current
property of the returned object. The .current
property could be initialized to an initial value — useRef(initialValue)
, for example. The object is persisted for the entire lifetime of the component.
Learn more in this comprehensive useRef guide or check out our useRef video tutorial:
Consider the sample application below:
\\nAccessing the DOM via useRef
.
Here’s the code responsible for the screencast above:
\\nconst AccessDOM = () => {\\n const textAreaEl = useRef(null);\\n const handleBtnClick = () => {\\n textAreaEl.current.value =\\n \\"This is the story of your life. You are a human being, and you\'re on a website about React Hooks\\";\\n textAreaEl.current.focus();\\n };\\n return (\\n <section style={{ textAlign: \\"center\\" }}>\\n <div>\\n <button onClick={handleBtnClick}>Focus and Populate Text Field</button>\\n </div>\\n <label\\n htmlFor=\\"story\\"\\n style={{\\n display: \\"block\\",\\n background: \\"olive\\",\\n margin: \\"1em\\",\\n padding: \\"1em\\"\\n }}\\n >\\n The input box below will be focused and populated with some text\\n (imperatively) upon clicking the button above.\\n </label>\\n <textarea ref={textAreaEl} id=\\"story\\" rows=\\"5\\" cols=\\"33\\" />\\n </section>\\n );\\n};\\n\\n
Other than just holding DOM refs, the “ref” object can hold any value. Consider a similar application below, where the ref object holds a string value:
\\nHere’s the code:
\\nconst HoldStringVal = () => {\\n const textAreaEl = useRef(null);\\n const stringVal = useRef(\\"This is a string saved via the ref object --- \\")\\n const handleBtnClick = () => {\\n textAreaEl.current.value =\\n stringVal.current + \\"This is the story of your life. You are a human being, and you\'re on a website about React Hooks\\";\\n textAreaEl.current.focus();\\n };\\n return (\\n <section style={{ textAlign: \\"center\\" }}>\\n <div>\\n <button onClick={handleBtnClick}>Focus and Populate Text Field</button>\\n </div>\\n <label\\n htmlFor=\\"story\\"\\n style={{\\n display: \\"block\\",\\n background: \\"olive\\",\\n margin: \\"1em\\",\\n padding: \\"1em\\"\\n }}\\n >\\n Prepare to see text from the ref object here. Click button above.\\n </label>\\n <textarea ref={textAreaEl} id=\\"story\\" rows=\\"5\\" cols=\\"33\\" />\\n </section>\\n );\\n};\\n\\n
You can do the same with the ID returned from setInterval, storing it in a ref for cleanup:
function TimerWithRefID() {\\n const setIntervalRef = useRef();\\n\\n useEffect(() => {\\n const intervalID = setInterval(() => {\\n // something to be done every 100ms\\n }, 100);\\n\\n // this is where the interval ID is saved in the ref object \\n setIntervalRef.current = intervalID;\\n return () => {\\n clearInterval(setIntervalRef.current);\\n };\\n }, []); // empty deps: set up the interval once on mount\\n}\\n\\n
Working on a near-real-world example can help bring your knowledge of Hooks to life. You can fetch data via Hooks for more Hooks practice, though React Suspense is the current recommended method for handling asynchronous operations.
\\nBelow is an example of fetching data with a loading indicator:
\\nThe code appears below:
\\n// React components must be capitalized, hence FetchData\\nconst FetchData = () => {\\n const stringifyData = data => JSON.stringify(data, null, 2)\\n const initialData = stringifyData({ data: null })\\n const loadingData = stringifyData({ data: \'loading...\' })\\n const [data, setData] = useState(initialData)\\n\\n const [gender, setGender] = useState(\'female\')\\n const [loading, setLoading] = useState(false)\\n\\n useEffect(\\n () => {\\n const fetchData = () => {\\n setLoading(true)\\n const uri = \'https://randomuser.me/api/?gender=\' + gender\\n fetch(uri)\\n .then(res => res.json())\\n .then(({ results }) => {\\n setLoading(false)\\n const { name, gender, dob } = results[0]\\n const dataVal = stringifyData({\\n ...name,\\n gender,\\n age: dob.age\\n })\\n setData(dataVal)\\n })\\n }\\n\\n fetchData()\\n },\\n [gender]\\n )\\n\\n return (\\n <>\\n <button\\n onClick={() => setGender(\'male\')}\\n style={{ outline: gender === \'male\' ? \'1px solid\' : 0 }}\\n >\\n Fetch Male User\\n </button>\\n <button\\n onClick={() => setGender(\'female\')}\\n style={{ outline: gender === \'female\' ? \'1px solid\' : 0 }}\\n >\\n Fetch Female User\\n </button>\\n\\n <section>\\n {loading ? <pre>{loadingData}</pre> : <pre>{data}</pre>}\\n </section>\\n </>\\n )\\n}\\n\\n
Here’s a live, editable useRef
cheat sheet.
useTransition
The key to understanding the useTransition
Hook is that it lets you control the priority of state changes. By default, any state change in React is given a high priority. However, when you mark a state change as a transition (maybe because it involves heavy computation), you’re telling React to give that state change a lower priority, meaning all other state changes would run and render on the screen before the transitioned state change runs.
Marking a state update as a transition is as simple as passing a synchronous function that performs the update to the startTransition
function returned by the useTransition
Hook:
import { useState, useTransition } from \'react\';\\n\\nconst App = () => {\\n const [timeUpdate, setTimeUpdate] = useState(2)\\n const [isPending, startTransition] = useTransition()\\n\\n const handleClick = () => {\\n startTransition(() => {\\n // handle the low-priority state change in here\\n setTimeUpdate(timeUpdate + 1)\\n })\\n }\\n\\n // ...\\n}\\n\\n
The isPending flag is true or false, indicating whether there is a pending transition, and we use the startTransition function to mark a state change as a transition.
useTransition and regular state updates
State updates wrapped in startTransition are given a low priority, while regular state updates are given a higher priority. So think of useTransition as a React Hook that lets you update the state without blocking the UI.
Let’s take a look at an example.
\\nI have created a CodeSandbox that makes two state updates:
\\ntextInput
state upon user inputlistItems
state with the currently entered user inputReact
has a mechanism called “batching” that allows it to combine multiple state changes into a single update to the component’s state.
When you call setState
in a React component, React does not immediately update the component’s state. Instead, it schedules a state update to be processed later. If you call setState
multiple times within the same event loop, React will batch these updates together into a single update before applying them to the component’s state and triggering a re-render.
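In isolation, batching looks like this (a minimal sketch with our own component name):

```jsx
const BatchedUpdates = () => {
  const [count, setCount] = useState(0)
  const [flag, setFlag] = useState(false)

  const handleClick = () => {
    // Both updates are queued and applied together:
    // the component re-renders once, not twice
    setCount(c => c + 1)
    setFlag(f => !f)
  }

  return <button onClick={handleClick}>{count} / {String(flag)}</button>
}
```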
That’s why, in this example, our setTextInput state update doesn’t trigger a re-render until after we’re done looping and updating the setListItems state; only then is a render triggered. This makes our application act a bit sluggish.
Now, let’s look at the same example but this time, we’ll transition the state change that has heavy computation.
\\nAs we can see in this CodeSandbox, there’s a significant improvement in our application. In this example, we’re telling react
to give setListItems
state update a lower priority seeing as it requires a heavy computation. This means that setTextInput
state would trigger a re-render upon state change and not have to be batched with the setListItem
state change.
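Since the CodeSandbox itself isn’t embedded here, the pattern looks roughly like this (a sketch; the 20,000-item loop is our own stand-in for the heavy computation):

```jsx
const App = () => {
  const [textInput, setTextInput] = useState('')
  const [listItems, setListItems] = useState([])
  const [isPending, startTransition] = useTransition()

  const handleChange = (e) => {
    // High priority: keep the input responsive
    setTextInput(e.target.value)

    // Low priority: building the large list can wait
    startTransition(() => {
      const items = []
      for (let i = 0; i < 20000; i++) {
        items.push(e.target.value)
      }
      setListItems(items)
    })
  }

  return (
    <div>
      <input value={textInput} onChange={handleChange} />
      {isPending
        ? <p>Updating list...</p>
        : listItems.map((item, index) => <p key={index}>{item}</p>)}
    </div>
  )
}
```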
N.B., if a state update causes a component to suspend, that state update should be wrapped in a transition.
\\nuseDeferredValue
The useDeferredValue
Hook was a new addition to React 18, and it offers developers a powerful new tool for optimizing their applications. This hook allows you to defer the rendering of a value until a future point in time, which can be incredibly useful in situations where you want to avoid unnecessary rendering.
Here’s the sample syntax code:
\\nimport { useState, useDeferredValue } from \'react\'\\n\\nconst App = () => {\\n const [valueToDefer, setValueToDefer] = useState(\\"\\")\\n const deferredValue = useDeferredValue(valueToDefer)\\n\\n return (\\n <p>{deferredValue}</p>\\n )\\n}\\n\\n
All we have to do is pass the value we want to defer into the useDeferredValue
Hook.
One of the most common use cases for the useDeferredValue
Hook is when you have a large number of updates occurring at once. For example, imagine you have a search bar in your application that updates in real time as the user types. If the user is a fast typer, this could result in dozens, or even hundreds, of updates occurring in rapid succession. Without any optimization, this could cause your application to slow down.
By using the useDeferredValue
Hook, you can avoid this problem by deferring the rendering of the search results until the user stops typing. This is similar to how debouncing works; it can dramatically improve performance.
Let’s demonstrate this use case with an example:
\\nconst Search =()=> {\\n const [searchQuery, setSearchQuery] = useState(\'\');\\n const [searchResults, setSearchResults] = useState([]);\\n\\n const deferredSearchQuery = useDeferredValue(searchQuery);\\n\\n useEffect(() => {\\n // Fetch search results using deferredSearchQuery\\n // Update setSearchResults with the new results\\n }, [deferredSearchQuery]);\\n\\n const handleSearchInputChange = (event) => {\\n setSearchQuery(event.target.value);\\n };\\n\\n return (\\n <div>\\n <input type=\\"text\\" value={searchQuery} onChange={handleSearchInputChange} />\\n <ul>\\n {searchResults.map((result) => (\\n <li key={result.id}>{result.title}</li>\\n ))}\\n </ul>\\n </div>\\n );\\n}\\n\\n
Here, we’re using the useDeferredValue
Hook to defer the rendering of the search results until after the user stops typing in the search bar. This helps to reduce unnecessary re-renders and improve performance.
useId
useId
is a React Hook that is used to generate unique IDs. This can be valuable in a number of scenarios, such as generating unique IDs for accessibility attributes.
Here’s the sample syntax code:
\\nimport { useId } from \'react\'\\n\\nconst App = () => {\\n const id = useId()\\n\\n return (\\n <input type=\\"text\\" id={id} />\\n )\\n}\\n\\n
Now, let’s look at a use case. Here’s an example of a scenario using the useId
Hook with a TextField
component:
const TextField =()=>{\\n return(\\n <>\\n <label htmlFor=\\"name\\" /> \\n <input type=\\"text\\" id=\\"name\\"/>\\n </>\\n )\\n}\\n\\n
We’ll use the TextField
component a couple of times in our App
component below:
const App=()=>{\\n return (\\n <div className=\\"inputs\\">\\n <TextField />\\n <TextField />\\n </div>\\n )\\n}\\n\\n
To link a label
element to an input
field, we use the id
and htmlFor
attributes. This will cause the browser to associate a particular label
element with a particular input
field. If we were working with plain HTML
, this wouldn’t be a problem; we could simply duplicate the elements and change the attributes by hand.
However, in our example above, we created a reusable TextField
component and we’re using this component twice in our App
component. Since the attributes on the element in the TextField
are static, every time we render the component, the attributes remain the same. Both rendered inputs end up sharing one id, which breaks the label association and is invalid HTML.
We can fix this by using the useId
Hook. Let’s modify the TextField
component, like so:
const TextField =()=>{\\nconst id = useId();\\n return(\\n <>\\n <label htmlFor={id} /> \\n <input type=\\"text\\" id={id}/>\\n </>\\n )\\n}\\n\\n
Now, every time we render the TextField
component, a unique ID will be associated with the elements that are rendered.
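If a single component needs several IDs, the recommended pattern is to call useId once and append suffixes rather than calling the Hook per field. A quick sketch (the SignupFields component is our own example):

```jsx
const SignupFields = () => {
  const id = useId()

  return (
    <>
      <label htmlFor={id + '-email'}>Email</label>
      <input id={id + '-email'} type='email' />

      <label htmlFor={id + '-password'}>Password</label>
      <input id={id + '-password'} type='password' />
    </>
  )
}
```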
useImperativeHandle
When building reusable components, you sometimes need to expose certain functionality to parent components. The traditional way of doing this with refs can expose too much of your component’s internal workings. That’s where useImperativeHandle
comes in — this hook lets you customize exactly what gets exposed to parent components when they use a ref.
Think of it like building a remote control for your TV. Sure, your TV has tons of internal circuits and components, but you only want to expose a few specific buttons on the remote. useImperativeHandle
lets you build that custom remote control interface for your components.
Let’s look at a practical example with a video player component:
\\nconst VideoPlayer = forwardRef((props, ref) => {\\n const videoRef = useRef();\\n\\n useImperativeHandle(ref, () => ({\\n // Only expose the methods we want parents to use\\n play() {\\n videoRef.current.play();\\n },\\n pause() {\\n videoRef.current.pause();\\n },\\n setPlaybackRate(rate) {\\n videoRef.current.playbackRate = rate;\\n },\\n // Notice we\'re not exposing things like volume, currentTime, or the video element itself\\n }));\\n\\n return (\\n <div className=\\"video-wrapper\\">\\n <video \\n ref={videoRef}\\n src={props.src}\\n width=\\"100%\\"\\n controls\\n />\\n {/* We could have internal controls here that use videoRef directly */}\\n </div>\\n );\\n});\\n\\n// Using the component:\\nconst App = () => {\\n const playerRef = useRef();\\n\\n return (\\n <div>\\n <VideoPlayer \\n ref={playerRef}\\n src=\\"/awesome-video.mp4\\" \\n />\\n <div className=\\"custom-controls\\">\\n <button onClick={() => playerRef.current.play()}>\\n Play\\n </button>\\n <button onClick={() => playerRef.current.setPlaybackRate(2)}>\\n 2x Speed\\n </button>\\n </div>\\n </div>\\n );\\n};\\n\\n
Instead of exposing the entire video element, we’re only exposing the methods we want the parent to access. This gives us better encapsulation and a cleaner API.
\\nuseFormStatus
Forms in React recently got a lot more interesting with the useFormStatus
Hook. It’s like having a backstage pass to what’s happening with your form submissions. While libraries like react-hook-form have done a great job handling form validation and state management, they don’t have direct access to React’s internal form submission pipeline. That’s where useFormStatus
comes in — it’s built directly into React and gives you real-time insights into form submissions happening through React’s native form actions.
So, want to show a spinner while the form is submitting? useFormStatus
’s got you covered! For a deep dive into useFormStatus
, check out our comprehensive article.
The key difference from react-hook-form is that useFormStatus
is specifically designed to work with React’s server actions and the new forms paradigm. While react-hook-form excels at client-side form management (validation, field arrays, form state), useFormStatus
is more focused on the submission process itself and integrates seamlessly with React’s server components and actions.
Here’s what makes useFormStatus special:
- It’s built directly into React, so no extra library is needed
- It gives you real-time insight into submissions made through React’s native form actions
- It’s designed to work with server actions and the new forms paradigm
Here’s a practical example:
\\nimport { useFormStatus } from \'react-dom\';\\n\\n// useFormStatus reads the status of the nearest parent <form>,\\n// so it must be called from a component rendered inside the form\\nconst SubmitButton = () => {\\n const { pending } = useFormStatus();\\n\\n return (\\n <button disabled={pending}>\\n {pending ? \'Submitting...\' : \'Submit\'}\\n </button>\\n );\\n};\\n\\nconst Form = () => {\\n async function handleSubmit(formData) {\\n await submitToServer(formData);\\n }\\n\\n return (\\n <form action={handleSubmit}>\\n <input name=\\"email\\" type=\\"email\\" />\\n <SubmitButton />\\n </form>\\n );\\n};\\n\\n
The neat thing here is how useFormStatus
handles all the loading state automatically. Your button just knows when the form is submitting!
useOptimistic
The useOptimistic
Hook provides a powerful solution for implementing optimistic updates in React applications. Optimistic updates allow you to temporarily update the UI immediately in response to a user action, before the server response arrives, creating a more responsive user experience.
This pattern is particularly valuable in applications where network requests might take some time, but you can reasonably predict the server’s response. Common use cases include social media interactions (likes, follows), to-do lists, or any interactive features where immediate feedback enhances the user experience.
\\nHere’s how the Hook works: you provide your current state and a function that describes how to update that state optimistically. The Hook returns both the optimistic state (which might be temporarily ahead of the real state) and a function to trigger optimistic updates.
\\nFor a comprehensive exploration of optimistic UI patterns, check out our detailed guide on implementing optimistic updates. Here’s a practical example:
\\nconst TodoList = () => {\\n const [todos, setTodos] = useState([\'Buy milk\', \'Walk dog\']);\\n const [optimisticTodos, addOptimisticTodo] = useOptimistic(\\n todos,\\n (state, newTodo) => [...state, newTodo]\\n );\\n\\n const addTodo = (newTodo) => {\\n // Optimistic updates must run inside a transition or form action,\\n // so the flow is wrapped in startTransition (imported from \'react\')\\n startTransition(async () => {\\n // Immediately update the UI\\n addOptimisticTodo(newTodo);\\n\\n try {\\n // Perform the actual server request\\n const response = await saveTodoToServer(newTodo);\\n // Update the real state with the server response\\n setTodos(prev => [...prev, response.todo]);\\n } catch (error) {\\n // Handle the error case appropriately\\n console.error(\'Failed to save todo:\', error);\\n // You might want to show an error notification here\\n toast.error(\'Failed to add todo. Please try again.\');\\n }\\n });\\n };\\n\\n return (\\n <div className=\\"todo-list\\">\\n <ul>\\n {optimisticTodos.map((todo, index) => (\\n <li key={index} className=\\"todo-item\\">\\n {todo}\\n </li>\\n ))}\\n </ul>\\n <button \\n onClick={() => addTodo(\'New todo \' + Date.now())}\\n className=\\"add-button\\"\\n >\\n Add Todo\\n </button>\\n </div>\\n );\\n};\\n\\n
The key benefits of using useOptimistic include:
- Immediate UI feedback in response to user actions
- A more responsive feel, even while network requests are still in flight
- A built-in way to reconcile the optimistic state with the real server response
Remember that while optimistic updates can significantly improve the perceived performance of your application, they should be used judiciously. Consider the likelihood of the operation succeeding and the impact of failure on the user experience when deciding whether to implement optimistic updates.
React Hooks give a lot of power to functional components. I hope this cheat sheet proves useful in your day-to-day use of React Hooks.
\\nCheers!
Enums are ubiquitous in modern programming. They are widely used to model categories of things, such as the possible states of a traffic light, the days of the week, and the months of the year.
\\nOne of the advantages of enums is that they enable us to map short lists of values into numbers, making it easier to compare and work with them in general. Enums in TypeScript have evolved from simple mappings of names to numbers to more advanced structures that can include methods and parameters.
\\nIn this article, we’ll explore different approaches for iterating over enums in TypeScript.
\\nEditor’s note: This article was last reviewed and updated in June 2025. The update includes clear, practical code examples for iterating over numeric and string enums using Object.keys()
, Object.values()
, and Object.entries()
. It also clarifies which enum types can be iterated at runtime, adds a comparison of string vs. numeric enums, introduces a reusable getEnumKeys()
utility function, and links to relevant community resources for further reading.
If you need to extract numeric keys or numeric values from an object in JavaScript, here are three common approaches:
\\nUse Object.keys()
combined with filtering to get keys that represent numbers.
Use Object.values()
and filter for values that are numbers.
Use Object.entries()
to filter both keys and values based on whether they are numeric.
Here’s a quick code snippet illustrating each method:
\\n// Option 1: Get numeric keys only (keys as strings, filter those that convert to numbers)\\nconst numericKeys = Object.keys(obj).filter(key => !isNaN(Number(key)));\\n\\n// Option 2: Get numeric values only\\nconst numericValues = Object.values(obj).filter(value => typeof value === \'number\');\\n\\n// Option 3: Get entries with numeric keys or numeric values (custom filter example)\\nconst numericEntries = Object.entries(obj).filter(\\n([key, value]) => !isNaN(Number(key)) || typeof value === \'number\'\\n);\\n
TypeScript enums are simple objects. For example:
\\nenum TrafficLight {\\n Green = 1,\\n Yellow,\\n Red\\n}\\n\\n
In the definition above, Green
is mapped to the number 1
. The subsequent members are mapped to auto-incremented integers. Hence, Yellow
is mapped to 2
, and Red
to 3
.
If we didn’t specify the mapping Green = 1
, TypeScript would pick 0
as a starting index.
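For example, with no explicit initializer the same enum starts at 0:

```ts
enum TrafficLight {
  Green,  // 0
  Yellow, // 1
  Red     // 2
}
```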
Sometimes, we want to iterate over a TypeScript enum — for instance, to perform certain actions for every element in the enum, such as rendering options in a UI or validating input values.
\\nHowever, enums in TypeScript can be either numeric or string-based, and this distinction impacts how we can iterate over them effectively.
\\nBefore diving into iteration techniques, it’s important to understand which methods work best depending on the enum type you’re working with.
\\nEnum Type | \\nDescription | \\nSuitable Iteration Methods | \\n
---|---|---|
Numeric Enums | \\nMap keys to numeric values, often with reverse mapping from values back to keys | \\nObject.keys() with filtering, Object.entries() with filtering | \\n
String Enums | \\nMap keys to string values, no reverse mapping | \\nObject.values() filtering by type, Object.entries() filtering | \\n
Numeric enums create a reverse mapping that includes both keys and numeric values, which requires filtering to avoid duplicate entries. String enums are simpler objects without reverse mappings, so filtering by type is usually enough.
\\nIn the following sections, we’ll explore four common iteration techniques, and clarify which work best for numeric vs. string enums.
\\nYou can iterate over enums in TypeScript using one of the following approaches, depending on the enum type and your needs:
\\nThe simplest way to iterate over an enum in TypeScript is to convert it to an array using the inbuilt Object.keys()
and Object.values()
methods. The former returns an array containing the keys of the enum object, and the latter returns an array of the enum’s values.
The following code snippet shows how to use the inbuilt object method to list the keys and values of an enum:
\\nconst keys = Object.keys(TrafficLight)\\n\\nkeys.forEach((key, index) => {\\n console.log(`${key} has index ${index}`)\\n})\\n\\n
The example above prints the following:
\\n\\"1 has index 0\\"\\n\\"2 has index 1\\"\\n\\"3 has index 2\\"\\n\\"Green has index 3\\"\\n\\"Yellow has index 4\\"\\n\\"Red has index 5\\"\\n\\n
See how the numeric keys appear first? This happens because numeric enums generate a reverse mapping — TypeScript compiles the enum to an object that contains both the forward mapping (key to value) and reverse mapping (value to key). The numeric keys “1”, “2”, “3” correspond to this reverse mapping.
\\nIf we want to only list the string keys, we’ll have to filter out the numeric ones:
\\nconst stringKeys = Object\\n .keys(TrafficLight)\\n .filter((v) => isNaN(Number(v)))\\n\\nstringKeys.forEach((key, index) => {\\n console.log(`${key} has index ${index}`)\\n})\\n\\n
In this case, the snippet above prints the following:
\\n\\"Green has index 0\\"\\n\\"Yellow has index 1\\"\\n\\"Red has index 2\\"\\n\\n
From the output above, we can see that the index
parameter has nothing to do with the actual numeric value in the enum. In fact, it is just the index of the key in the array returned by Object.keys()
.
Similarly, we can iterate over the enum values:
\\nconst values = Object.values(TrafficLight)\\n\\nvalues.forEach((value) => {\\n console.log(value)\\n})\\n\\n
Again, the snippet above prints both string and numeric values:
\\n\\"Green\\"\\n \\"Yellow\\"\\n \\"Red\\"\\n1\\n2\\n3\\n\\n
Let’s say we’re interested in the numeric values, not in the string ones. We can filter the latter out similar to before, using .filter((v) => !isNaN(Number(v)))
.
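Applied to the values, that filter looks like this:

```ts
const numericValues = Object
  .values(TrafficLight)
  .filter((v) => !isNaN(Number(v)))

numericValues.forEach((value) => {
  console.log(value) // 1, 2, 3
})
```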
It’s worth noting that we have to filter the values only because we’re dealing with numeric enums. If we had assigned a string value to the members of our enumeration, we wouldn’t have to filter out numeric keys and values:
\\nenum TrafficLight {\\n Green = \\"G\\",\\n Yellow = \\"Y\\",\\n Red = \\"R\\"\\n}\\n\\nObject.keys(TrafficLight).forEach((key, index) => {\\n console.log(`${key} has index ${index}`)\\n})\\n\\nObject.values(TrafficLight).forEach((value) => {\\n console.log(value)\\n})\\n\\n
The snippet above prints what follows, where the first three lines are from the first forEach
loop and the last three lines are from the second forEach
loop:
\\"Green has index 0\\"\\n\\"Yellow has index 1\\"\\n\\"Red has index 2\\"\\n\\"G\\"\\n\\"Y\\"\\n\\"R\\"\\n\\n
String enums are very useful, as they are more human-readable than numeric ones. We can also mix numeric and string enums, although it is not advisable to do so.
\\nUsing the Object.keys()
and Object.values()
methods to iterate over the members of enums is a simple solution. Nonetheless, it is not very type-safe, as TypeScript returns keys and values as strings or numbers, thus not preserving the enum typing.
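Object.entries() has the same caveat; here is a sketch that pairs each key with its value while skipping the reverse-mapped numeric keys:

```ts
for (const [key, value] of Object.entries(TrafficLight)) {
  // Reverse-mapped entries of a numeric enum have numeric keys
  if (isNaN(Number(key))) {
    console.log(`${key} -> ${value}`)
  }
}
```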
Before we explore other options for iterating over enums, let’s talk about reverse mapping in TypeScript’s numeric enums.
\\nReverse mapping is a TypeScript feature that compiles numeric enums to objects with both a name → value
property assignment and a value → name
property assignment.
See the following numeric enum:
\\nenum Drinks {\\n WATER = 1,\\n SODA,\\n JUICE\\n}\\n\\n
This enum would be compiled in TypeScript to an object with a form similar to the following:
\\nconst drinks = {\\n \'1\': \'WATER\',\\n \'2\': \'SODA\',\\n \'3\': \'JUICE\',\\n WATER: 1,\\n SODA: 2,\\n JUICE: 3\\n}\\n\\n
This is useful for providing human-readable references to enum values and making debugging easier.
\\nIt’s also important to note that TypeScript only provides this functionality to numeric enums.
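In practice, reverse mapping gives you lookups in both directions:

```ts
enum Drinks {
  WATER = 1,
  SODA,
  JUICE
}

console.log(Drinks.SODA) // 2, name to value
console.log(Drinks[2])   // "SODA", value to name via reverse mapping
```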
\\n\\nfor
loopsInstead of relying on Object.keys()
and Object.values()
, another approach is to use for
loops to iterate over the keys and then use reverse mapping to get the enum values.
TypeScript offers three different kinds of for loop statements, including the for..in
and for..of
loops, which can be used to iterate an enum.
The for..in
statement loops through the keys of the enum; this works on both numeric and string enums. As mentioned earlier, numeric enums return both the defined keys and the reverse-mapped numeric values, so iterating over the defined keys alone requires some additional logic. With string enums, it will only loop through the defined keys. You can think of using for..in
as a loop of what Object.keys
returns.
On the other hand, the for..of
statement cannot be run directly on an enum like we run for..in
— it requires some additional logic to iterate an enum.
Let’s look at a few examples.
\\nFor..in
loop through numeric enumsenum TrafficLight {\\n Green,\\n Yellow,\\n Red\\n}\\n\\nfor (const tl in TrafficLight) {\\n const value = TrafficLight[tl]\\n\\n if (typeof value === \\"string\\") {\\n console.log(`Value: ${TrafficLight[tl]}`)\\n }\\n}\\n\\n
The script above will print the following:
\\n\\"Value: Green\\"\\n\\"Value: Yellow\\"\\n\\"Value: Red\\"\\n\\n
Notice that, in the example above, we filtered out the numeric values. This way, we can extract the member names of our enum. If we wanted to fetch them, instead of the string values, we could use a different guard in the if
statement: typeof value !==
\\"string\\"
.
For..in loop through string enums
enum TrafficLight {\\n Green = \\"G\\",\\n Yellow = \\"Y\\",\\n Red = \\"R\\"\\n}\\n\\nfor (const tl in TrafficLight) {\\n console.log(\`Value: ${TrafficLight[tl]}\`)\\n}\\n\\n
The script above will print the following:
\\nValue: G\\nValue: Y\\nValue: R\\n\\n
With this example, we didn’t need the typeof value === \\"string\\"
condition check because string enums don’t have reverse mapping and won’t return any numerical keys. Our example logs our enum’s assigned values, but if we wanted the keys, we could log the iterator tl
instead.
For..of loop for enums
The for..of loop statement can be used to loop through a JavaScript iterable object. If we tried to use a for..of loop directly on an enum as we did with for..in, we would get an error:
enum TrafficLight {\\n Green = \\"G\\",\\n Yellow = \\"Y\\",\\n Red = \\"R\\"\\n}\\n\\nfor (const tl of TrafficLight) {\\n console.log(`Value: ${tl}`)\\n}\\n\\n
If we tried to run the above block, it would throw Type \'typeof TrafficLight\' is not an array type or a string type.
This is because enums are not iterable.
To fix this, we have to provide the for..of
statement an iterable to loop through. An option would be to extract the keys of our enum using Object.keys()
for the for..of
loop to use:
enum TrafficLight {\\n Green = \\"G\\",\\n Yellow = \\"Y\\",\\n Red = \\"R\\"\\n}\\n\\nfor (const tl of Object.keys(TrafficLight)) {\\n console.log(`Value: ${TrafficLight[tl]}`)\\n}\\n\\n
The script above will print:
\\nValue: G\\nValue: Y\\nValue: R\\n\\n
Earlier, we talked about numeric enums being reverse-mapped. So while our above snippet works as expected for string enums, it will require a little tweaking for numeric enums as it did with the for..in
loops:
enum TrafficLight {\\n Green = 1,\\n Yellow,\\n Red\\n}\\n\\nfor (const tl of Object.keys(TrafficLight)) {\\n const value = TrafficLight[tl]\\n\\n if (typeof value === \\"string\\") {\\n console.log(`Value: ${TrafficLight[tl]}`)\\n }\\n}\\n\\n
It will give us the following:
\\nValue: Green\\nValue: Yellow\\nValue: Red\\n\\n
\\nYou could also choose to use the
for..of
loop on the values of the enum instead.
One of the benefits of defining an enum is that we provide an object with a set of limited constants.
\\nWe have explored several options to iterate an enum by converting it to an iterable object using Object.keys()
or Object.values()
; however, with these options, we lose the strict typing of our enum to the specified keys alone because Object.keys()
returns an array of strings.
To fix this, we’ll have to explicitly inform TypeScript about the type of our enum’s keys:
\\nenum TrafficLight {\\n Green = \\"G\\",\\n Yellow = \\"Y\\",\\n Red = \\"R\\"\\n}\\n\\nfunction enumKeys<O extends object, K extends keyof O = keyof O>(obj: O): K[] {\\n return Object.keys(obj).filter(k => Number.isNaN(k)) as K[]\\n}\\n\\n
In the above code, the enumKeys
function simply extracts the keys of the enum and returns them as an array. With this, the return type of enumKeys(TrafficLight)
is (\\"Green\\" | \\"Yellow\\" | \\"Red\\")[]
.
The filter method .filter(k => Number.isNaN(Number(k))) is used to ensure support for numeric enums by discarding the numeric keys that come from reverse mapping.
This typed array can then be looped through using a for..of
loop:
for (const tl of enumKeys(TrafficLight)) {\\n const value = TrafficLight[tl]\\n\\n if (typeof value === \\"string\\") {\\n console.log(`Value: ${TrafficLight[tl]}`)\\n }\\n}\\n\\n
We use for..of rather than for..in in our for loop because the latter iterates over the indexes of the items in our typed array, not the items themselves.
Lodash is a JavaScript library that provides many utility methods for common programming tasks. Such methods use the functional programming paradigm to let us write more concise and readable code.
\\nTo install Lodash into our project, we can run the following command:
\\nnpm install lodash --save\\n\\n
The npm install lodash
command will install the module, and the save
flag will update the contents of the package.json
file.
It turns out we can leverage its forIn
method to iterate over an enum in TypeScript:
import { forIn } from \'lodash\'\\n\\nenum TrafficLight {\\n Green = 1,\\n Yellow,\\n Red,\\n}\\n\\nforIn(TrafficLight, (value, key) => console.log(key, value))\\n\\n
The forIn
method iterates over both keys and values of a given object, invoking, in its simplest form, a given function for each (key, value)
pair. It is essentially a combination of the Object.keys()
and Object.values()
methods.
As we might expect, if we run the example above, we’ll get both string and numeric keys:
\\n1 Green\\n2 Yellow\\n3 Red\\nGreen 1\\nYellow 2\\nRed 3\\n\\n
As before, we can easily filter out string or numeric keys, depending on our needs:
\\nimport { forIn } from \'lodash\'\\n\\nenum TrafficLight {\\n Green = 1,\\n Yellow,\\n Red,\\n}\\n\\nforIn(TrafficLight, (value, key) => {\\n if (isNaN(Number(key))) {\\n console.log(key, value)\\n }\\n})\\n\\n
The example above prints the following:
\\nGreen 1\\nYellow 2\\nRed 3\\n\\n
In this case, the type of key
is string
, whereas the type of value
is TrafficLight
. Hence, this solution preserves the typing of the value
:
import { forIn } from \'lodash\'\\n\\nenum TrafficLight {\\n Green = \\"G\\",\\n Yellow = \\"Y\\",\\n Red = \\"R\\"\\n}\\n\\nforIn(TrafficLight, (value, key) => {\\n if (isNaN(Number(key))) {\\n console.log(key, value)\\n }\\n})\\n\\n
As we might expect, the example above prints the following:
\\nGreen G\\nYellow Y\\nRed R\\n\\n
getEnumKeys() for reusable enum key extraction
To simplify and encapsulate the process of retrieving the keys of a TypeScript enum, you can use the reusable utility function getEnumKeys(). This helper works with both numeric and string enums and filters out any reverse-mapping numeric keys automatically. It returns a clean array of enum keys as strings, making enum key extraction straightforward and consistent across your codebase:
/**\\n * Utility to get the string keys of a TypeScript enum.\\n * Works with both numeric and string enums.\\n * @param enumObj The enum object\\n * @returns Array of enum keys as strings\\n */\\nfunction getEnumKeys<T extends object>(enumObj: T): (keyof T)[] {\\n  return Object.keys(enumObj).filter(key => isNaN(Number(key))) as (keyof T)[];\\n}\\n\\n// Example usage:\\n\\nenum Colors {\\n  Red = \'RED\',\\n  Green = \'GREEN\',\\n  Blue = \'BLUE\',\\n}\\n\\nconst keys = getEnumKeys(Colors);\\nconsole.log(keys); // Output: [\'Red\', \'Green\', \'Blue\']\\n
In this article, we explored multiple techniques to iterate through enums in TypeScript, including built-in object methods, for
loops, and third-party libraries like Lodash. By leveraging methods such as Object.keys()
and Object.values()
, we learned how to handle both string and numeric enums. Additionally, we saw how using for..in
and for..of
loops, combined with type filtering, can provide flexibility when iterating over enums.
As usual, there’s no unique “right” solution. The way you iterate over the key/values of your enums strongly depends on what you have to do and whether you wish to preserve the enum typing.
For more insights, check out the related articles on our blog.
\\nThanks for reading, and happy coding!
React Router has long been a popular routing solution for SPAs, developed by the team behind Remix. Incremental improvements to the routing library brought React Router and Remix closer in functionality, leading to their eventual merger into React Router v7. With this recent release, React Router can be used as either a routing library or a full-stack framework, incorporating the entire functionality of Remix. It also includes React v19 as a dependency.
\\nThis article demonstrates how to build an SSR application with React Router v7 by creating a book tracking app using tools like Radix Primitives, React Icons, and Tailwind CSS. Prior knowledge of React.js, TypeScript, and basic data fetching concepts like actions and loaders is helpful but not required. The final project source code can be found here.
\\nNode.js v20 is the minimum requirement for running React Router, so make sure your device runs that version or something higher:
\\nnode --version\\n\\n
Next, install the React Router framework by running npm create vite
. “React Router v7” is available as one of the options under React Vite templates. Selecting this option will redirect you to the React Router framework CLI to complete the installation.
For that reason, this tutorial will go straight to using the React Router CLI. Here, the title of the example project is react-router-ssr
.
Open your terminal and run the following command:
\\nnpx create-react-router@latest react-router-ssr\\n\\n
The CLI will ask if you want to initialize a git repo for the project. Check “yes” if you want that. It will also ask if you want to install the dependencies using npm. Here are both options checked:
\\nThis will create a folder with whatever you named your project. Change into that directory, then start the development server of the application:
\\ncd react-router-ssr\\nnpm run dev\\n\\n
After that, open your browser and visit the URL http://localhost:5173
, where you should be greeted with a homepage that looks like this:
With that, you have successfully installed the React Router Framework.
\\nSince this tutorial doesn’t involve deploying the app with Docker, you can safely remove all Docker-related files from the source code for a cleaner codebase. These files — .dockerignore
, Dockerfile
, Dockerfile.bun
, and Dockerfile.pnpm
— are included in the template for cases where Docker deployment is needed.
In order to use React Router v7 for SSR, make sure ssr
is set to true
in the React Router configuration file. It is set to true
by default. Open the react-router.config.ts
file in your code editor to confirm:
// react-router.config.ts\\n\\nimport type { Config } from \'@react-router/dev/config\';\\n\\nexport default {\\n // Config options...\\n // Server-side render by default, to enable SPA mode set this to \`false\`\\n ssr: true,\\n} satisfies Config;\\n\\n
This tutorial uses a “light mode” theme for the app, so you need to disable dark mode in Tailwind CSS. Open the app/app.css
file and comment out all the “dark mode” styles:
/* app/app.css */\\n\\n@tailwind base;\\n@tailwind components;\\n@tailwind utilities;\\n\\nhtml,\\nbody {\\n /* @apply bg-white dark:bg-gray-950; */\\n @media (prefers-color-scheme: dark) {\\n /* color-scheme: dark; */\\n }\\n}\\n\\n
After that, you’ll create your first SSR page. You will define all your routes (route modules) inside the app/routes/
folder, but home.tsx
will serve as the first page. There are also going to be other routes that use it as a frame. Create the file app/routes/home.tsx
.
Inside app/routes/home.tsx
, export the <Home />
component that contains the following:
// app/routes/home.tsx\\n\\nimport { Outlet } from \'react-router\';\\nimport { Fragment } from \'react/jsx-runtime\';\\nimport Header from \'~/components/Header\';\\nimport Footer from \'~/components/Footer\';\\n\\nexport default function Home() {\\n return (\\n <Fragment>\\n <Header />\\n <main className=\'max-w-screen-lg mx-auto my-4\'>\\n <Outlet />\\n </main>\\n <Footer />\\n </Fragment>\\n );\\n}\\n\\n
The file imports two React components you will create later (<Header />
and <Footer />
) and the <Outlet />
component from React Router. <Outlet />
renders the components of any nested route that uses home.tsx
as its layout.
To display something on the page, you’ll need to create the imported custom components. Start by modifying the app/welcome
folder that comes with the template:
Delete the app/welcome folder and create a new folder named app/components, or simply rename the welcome folder to components and delete all the files inside it.
Next, in the app/components
folder, create two new files: Header.tsx
and Footer.tsx
.
The <Header />
component will display a <header>
that will persist for most of the app. Here is the code for it:
// app/components/Header.tsx\\n\\nimport { Link } from \'react-router\';\\nimport BookForm from \'./BookForm\';\\n\\nexport default function Header() {\\n return (\\n <header className=\'flex justify-between items-center px-8 py-4\'>\\n <h1 className=\'text-3xl font-medium\'>\\n <Link to=\'/\'>Book Tracker App</Link>\\n </h1>\\n <BookForm />\\n </header>\\n );\\n}\\n\\n
The Header.tsx file imports <Link /> from React Router, the framework's optimized equivalent of the <a> tag. It also imports a <BookForm /> component that does not exist yet. Finally, the file adds some Tailwind CSS styles so that the HTML elements look good on the page.
Next, create the <BookForm />
component. But for that, you first need to install Radix’s headless dialog component. You will eventually use it to create a dialog form for adding a new book to track. This is also a good time to install React Icons as you will need it for some components later on:
npm install @radix-ui/react-dialog react-icons\\n\\n
When the packages are installed, create a new file inside the app/components
folder called BookForm.tsx
:
// app/components/BookForm.tsx\\n\\nimport { useState } from \'react\';\\nimport { Form } from \'react-router\';\\nimport * as Dialog from \'@radix-ui/react-dialog\';\\nimport Button from \'./Button\';\\n\\nexport default function BookForm() {\\n const [isOpen, setIsOpen] = useState<boolean>(false);\\n\\n return (\\n <Dialog.Root open={isOpen} onOpenChange={setIsOpen}>\\n <Dialog.Trigger asChild>\\n <Button>Add Book</Button>\\n </Dialog.Trigger>\\n <Dialog.Portal>\\n <Dialog.Overlay className=\'bg-black/50 fixed inset-0\' />\\n <Dialog.Content className=\'bg-white fixed top-1/2 left-1/2 -translate-y-1/2 -translate-x-1/2 px-8 py-4 w-5/6 max-w-sm\'>\\n <Dialog.Title className=\'font-medium text-xl py-2\'>\\n Add New Book\\n </Dialog.Title>\\n <Dialog.Description>Start tracking a new book</Dialog.Description>\\n <Form\\n method=\'post\'\\n onSubmit={() => setIsOpen(false)}\\n action=\'/?index\'\\n className=\'mt-2\'\\n >\\n <div>\\n <label htmlFor=\'title\'>Book Title</label>\\n <br />\\n <input\\n name=\'title\'\\n type=\'text\'\\n className=\'border border-black\'\\n id=\'title\'\\n required\\n />\\n </div>\\n <div>\\n <label htmlFor=\'author\'>Author</label>\\n <br />\\n <input\\n name=\'author\'\\n type=\'text\'\\n id=\'author\'\\n className=\'border border-black\'\\n required\\n />\\n </div>\\n <div>\\n <label htmlFor=\'isbn\'>ISBN (Optional)</label>\\n <br />\\n <input\\n name=\'isbn\'\\n type=\'text\'\\n id=\'isbn\'\\n className=\'border border-black\'\\n />\\n </div>\\n <div className=\'mt-4 text-right\'>\\n <Dialog.Close asChild>\\n <Button variant=\'cancel\'>Cancel</Button>\\n </Dialog.Close>\\n <Button type=\'submit\'>Save</Button>\\n </div>\\n </Form>\\n </Dialog.Content>\\n </Dialog.Portal>\\n </Dialog.Root>\\n );\\n}\\n\\n
The BookForm.tsx component uses React's useState to control the dialog box and Tailwind CSS to style everything. Notice that, in turn, the file imports a <Button /> component that does not exist yet.
Next, create the <Button />
component:
// app/components/Button.tsx\\n\\nimport type { ComponentProps, ReactNode } from \'react\';\\n\\ninterface Props extends ComponentProps<\'button\'> {\\n children?: ReactNode;\\n variant?: \'cancel\' | \'delete\' | \'normal\';\\n}\\n\\nexport default function Button({\\n children,\\n variant = \'normal\',\\n ...otherProps\\n}: Props) {\\n const variantStyles: Record<NonNullable<typeof variant>, string> = {\\n cancel: \'text-red-700\',\\n normal: \'text-white bg-purple-700 hover:bg-purple-800\',\\n delete: \'text-white bg-red-700 hover:bg-red-800\',\\n };\\n return (\\n <button\\n className={`rounded-full px-4 py-2 text-center text-sm ${variantStyles[variant]}`}\\n {...otherProps}\\n >\\n {children}\\n </button>\\n );\\n}\\n\\n
As seen in this button component, it accepts some props like children
and variant
. It also has three variants (cancel
, normal
, and delete
) with their own unique styling.
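For instance, here is a quick usage sketch of the three variants (hypothetical markup, not a snippet from the project files):

<>
  <Button>Save</Button>                    {/* 'normal' is the default */}
  <Button variant='cancel'>Cancel</Button> {/* red text, no background */}
  <Button variant='delete'>Delete</Button> {/* white text on a red background */}
</>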
Finally, for the home.tsx
route, create the <Footer />
component:
// app/components/Footer.tsx\\n\\nimport { Link } from \'react-router\';\\n\\nexport default function Footer() {\\n return (\\n <footer className=\'text-center my-5\'>\\n <Link to=\'/about\' className=\'text-purple-700\'>\\n About the App\\n </Link>\\n </footer>\\n );\\n}\\n\\n
With that, you should have a basic structure for your app up and running:
SSR can be roughly divided into two techniques: dynamic rendering, where the server generates pages for every individual request, and static site generation (SSG), where pages are generated ahead of time and stored on the server. For SSG pages, the content on the page is the same (static) no matter who requests it.
\\nDynamic SSR uses server-side logic to generate pages when requested. The server sends the markup for those pages to the client side (browser) where they are subsequently hydrated. However, in static sites, all the files necessary for a page (HTML, CSS, JavaScript) are generated at build time. They are then sent to the client more quickly as there is no need for the server to generate them dynamically.
\\nThere are upsides and downsides to using any of these approaches. A good rule of thumb is to use SSG when you want all the users to see the same thing (for example blog posts, contact, and About pages) and that page does not need frequent updates. On the other hand, if it is a page where the content frequently changes, or where different users need to access different resources unique to them, then dynamic SSR is the way to go. It is also worth noting that SSG pages are easy to deploy as they can be served using a CDN.
For the example project, the /about route is going to be generated with SSG. React Router v7 lets developers combine both rendering techniques in a single app if they want to.
Open the React Router config and set up routes to pre-render (or statically generate). In this case, the app will only pre-render the /about
route (or page):
// react-router.config.ts\\n\\nimport type { Config } from \'@react-router/dev/config\';\\n\\nexport default {\\n // Config options...\\n // Server-side render by default, to enable SPA mode set this to `false`\\n ssr: true,\\n async prerender() {\\n return [\'about\'];\\n },\\n} satisfies Config;\\n\\n
Create the app/routes/about.tsx file. It will contain the static content that will appear on the About page:
// app/routes/about.tsx\\n\\nimport { Fragment } from \'react/jsx-runtime\';\\nimport { Link } from \'react-router\';\\n\\nexport default function About() {\\n return (\\n <Fragment>\\n <h1 className=\'px-8 py-4 text-3xl font-medium\'>\\n <Link to=\'/\'>Book Tracker App</Link>\\n </h1>\\n <main className=\'max-w-screen-lg mx-auto my-4\'>\\n <p className=\'mb-2 mx-5\'>\\n This app was built for readers who love the simplicity of tracking\\n what they’ve read and what they want to read next. With just the\\n essentials, it’s designed to keep your reading list organized without\\n the distractions of unnecessary features.\\n </p>\\n <p className=\'mb-2 mx-5\'>\\n We believe the joy of reading should stay front and center. Whether\\n it’s noting down the books you’ve finished or keeping a simple list of\\n what’s next, this app focuses on helping you stay connected to your\\n reading journey in the most straightforward way possible.\\n </p>\\n <p className=\'mb-2 mx-5\'>\\n Sometimes less is more, and that’s the philosophy behind this app. By\\n keeping things minimal, it offers a clean and easy way to manage your\\n reading habits so you can spend less time tracking and more time\\n diving into your next great book.\\n </p>\\n </main>\\n </Fragment>\\n );\\n}\\n\\n
This section will explain how to configure routing in the React Router framework.
\\n\\nBefore viewing the About page you just created on the browser, you need to configure React Router to display that route module (about.tsx
) whenever a visitor navigates to /about
. This configuration happens in app/routes.ts
. The file is where one lays out the entire hierarchy of the routes in their app:
// app/routes.ts\\nimport { type RouteConfig, index, route } from \'@react-router/dev/routes\';\\n\\nexport default [\\n index(\'routes/home.tsx\'),\\n route(\'about\', \'routes/about.tsx\'),\\n] satisfies RouteConfig;\\n\\n
The code above imports the index and route functions from @react-router/dev/routes. index marks a route module as the default child for a path, while route takes the URL path to match as its first argument and the route module to display when that URL is matched as its second. With all of that, you should now be able to navigate to the static About page:
Run npm run build in the terminal when you want to build your app. This bundles the app and generates the static About page inside a build/ folder.
But the home (/) and /about routes are not the only routes the example app will have. Set up the routing for the entire application:
// app/routes.ts\\nimport {\\n type RouteConfig,\\n index,\\n route,\\n layout,\\n} from \'@react-router/dev/routes\';\\n\\nexport default [\\n layout(\'routes/home.tsx\', [\\n index(\'routes/book-list.tsx\'),\\n route(\'book/:bookId\', \'routes/book.tsx\'),\\n ]),\\n route(\'about\', \'routes/about.tsx\'),\\n] satisfies RouteConfig;\\n\\n
As you can see here, the routes make use of a layout function that takes two arguments: the path to the layout route module (routes/home.tsx) and an array of the nested routes that render inside it.
Whenever the user navigates to any of the nested routes, React Router displays the parent layout route first. After that, it takes advantage of the <Outlet /> component to fill in content unique to the route the user navigated to.
loader functions are a unique concept in React Router. They are functions exported from route modules that return the data a route needs to render, and they can only be used in route modules, nowhere else.
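In its simplest form, a loader looks something like this sketch (a hypothetical route module, not part of the book app):

// Hypothetical route module illustrating the shape of a loader
import type { Route } from './+types/example';

export async function loader({ params, request }: Route.LoaderArgs) {
  // Fetch or compute whatever this route needs before it renders
  return { path: new URL(request.url).pathname, params };
}

The component exported from the same module then receives this return value as its loaderData prop, as you'll see in the book-list route below.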
In the example app, you'll create a route that lists all the books a user is tracking. That route module will use a loader to fetch the stored book data whenever the route loads. For this, first create a data storage solution, which for illustration purposes is merely an in-memory JavaScript object.
\\nCreate the app/model.ts
file:
// app/model.ts\\n\\ninterface Book {\\n id: number;\\n title: string;\\n author: string;\\n isFinished: boolean;\\n isbn?: string;\\n rating?: 1 | 2 | 3 | 4 | 5;\\n}\\n\\ninterface Data {\\n books: Book[];\\n}\\n\\nconst storage: Data = {\\n books: [\\n {\\n id: 0,\\n title: `Numbers Don\'t Lie: 71 Stories to Help Us Understand the Modern World`,\\n author: \'Vaclav Smil\',\\n isbn: `978-0241454411`,\\n isFinished: true,\\n rating: 1,\\n },\\n ],\\n};\\nexport { type Book, storage };\\n\\n
Next, create a new route to display all the books in the storage
object. To do this, create a route module named book-list.tsx
:
// app/routes/book-list.tsx\\n\\nimport type { Route } from \'./+types/book-list\';\\nimport BookCard from \'~/components/BookCard\';\\nimport { storage } from \'~/model\';\\n\\nexport async function loader({}: Route.LoaderArgs) {\\n return storage;\\n}\\n\\nexport default function BookList({ loaderData }: Route.ComponentProps) {\\n return (\\n <div className=\'mx-5\'>\\n {loaderData.books\\n .slice()\\n .reverse()\\n .map((book) => (\\n <BookCard key={book.id} {...book} />\\n ))}\\n </div>\\n );\\n}\\n\\n
As you can see, this route module exports a loader function. The route's main component then receives whatever the loader returns as the loaderData prop. But before you see the output of these changes, you need to do a few extra things.
Create the imported component BookCard
that does not exist yet:
// app/components/BookCard.tsx\\n\\nimport { Link } from \'react-router\';\\nimport { IoCheckmarkCircle } from \'react-icons/io5\';\\nimport type { Book } from \'~/model\';\\n\\nexport default function BookCard({\\n id,\\n title,\\n author,\\n isFinished,\\n isbn,\\n rating,\\n}: Book) {\\n return (\\n <Link\\n to={`book/${id}`}\\n className=\'block flex px-5 py-4 max-w-lg mb-2.5 border border-black hover:shadow-md\'\\n >\\n <div className=\'w-12 shrink-0\'>\\n {isbn ? (\\n <img\\n className=\'w-full h-16\'\\n src={`https://covers.openlibrary.org/b/isbn/${isbn}-S.jpg`}\\n alt={`Cover for ${title}`}\\n />\\n ) : (\\n <span className=\'w-full h-16 block bg-gray-200\'></span>\\n )}\\n </div>\\n <div className=\'flex flex-col ml-4 grow\'>\\n <span className=\'font-medium\'>{title}</span>\\n <span>{author}</span>\\n <div className=\'flex justify-between\'>\\n <span>Rating: {rating ? `${rating}/5` : \'None\'}</span>\\n {isFinished && (\\n <span className=\'flex items-center gap-1\'>\\n Finished <IoCheckmarkCircle className=\'text-green-600\' />\\n </span>\\n )}\\n </div>\\n </div>\\n </Link>\\n );\\n}\\n\\n
The <BookCard />
component is a clickable card. It contains the most important info about a book entry like title, author, and possibly a cover, among other things.
After that, open the app/routes.ts file and comment out the other route. This is so that React Router won't throw errors, as there is no route module for that defined route yet:
// app/routes.ts\\n\\n...\\nexport default [\\n layout(\'routes/home.tsx\', [\\n index(\'routes/book-list.tsx\'),\\n // route(\'book/:bookId\', \'routes/book.tsx\'),\\n ]),\\n route(\'about\', \'routes/about.tsx\'),\\n] satisfies RouteConfig;\\n\\n
With all of that done, you should have a homepage that reads data from storage
in app/model.ts
:
This means that any book added to storage
should show up in the book-list.tsx
route.
Whenever a user clicks on a book card, the app should navigate to a new page that displays details about that book. In order to set this up, first uncomment the route to the /book/:bookId
page:
// app/routes.ts\\n\\n...\\nexport default [\\n layout(\'routes/home.tsx\', [\\n index(\'routes/book-list.tsx\'),\\n route(\'book/:bookId\', \'routes/book.tsx\'),\\n ]),\\n route(\'about\', \'routes/about.tsx\'),\\n] satisfies RouteConfig;\\n\\n
Then, create the associated route module. The file will be app/routes/book.tsx
, and it will contain a loader that returns the details of whatever book the user clicks on:
// app/routes/book.tsx\\n\\nimport { useState, type ChangeEvent } from \'react\';\\nimport { Link, Form } from \'react-router\';\\nimport { IoArrowBackCircle, IoStarOutline, IoStar } from \'react-icons/io5\';\\nimport type { Route } from \'./+types/book\';\\nimport Button from \'~/components/Button\';\\nimport { storage, type Book } from \'~/model\';\\n\\nexport async function loader({ params }: Route.LoaderArgs) {\\n const { bookId } = params;\\n const book: Book | undefined = storage.books.find(({ id }) => +bookId === id);\\n return book;\\n}\\n\\nexport default function Book({ loaderData }: Route.ComponentProps) {\\n const [isFinished, setIsFinished] = useState<boolean>(\\n loaderData?.isFinished || false\\n );\\n const [rating, setRating] = useState<number>(Number(loaderData?.rating));\\n return (\\n <div className=\'mx-5\'>\\n <Link to=\'/\' className=\'text-purple-700 flex items-center gap-1 w-fit\'>\\n <IoArrowBackCircle /> Back to home\\n </Link>\\n <div className=\'flex mt-5 max-w-md\'>\\n <div className=\'w-48 h-72 shrink-0\'>\\n {loaderData?.isbn ? (\\n <img\\n className=\'w-full h-full\'\\n src={`https://covers.openlibrary.org/b/isbn/${loaderData.isbn}-L.jpg`}\\n alt={`Cover for ${loaderData.title}`}\\n />\\n ) : (\\n <span className=\'block w-full h-full bg-gray-200\'></span>\\n )}\\n </div>\\n <div className=\'flex flex-col ml-5 grow\'>\\n <span className=\'font-medium text-xl\'>{loaderData?.title}</span>\\n <span>{loaderData?.author}</span>\\n <Form method=\'post\'>\\n <span className=\'my-5 block\'>\\n <input\\n type=\'checkbox\'\\n name=\'isFinished\'\\n id=\'finished\'\\n checked={isFinished}\\n onChange={(e: ChangeEvent<HTMLInputElement>) =>\\n setIsFinished(e.target.checked)\\n }\\n />\\n <label htmlFor=\'finished\' className=\'ml-2\'>\\n Finished\\n </label>\\n </span>\\n <div className=\'mb-5\'>\\n <span>Your Rating:</span>\\n <span className=\'text-3xl flex\'>\\n {[1, 2, 3, 4, 5].map((num) => {\\n return (\\n <span key={num} className=\'flex\'>\\n <input\\n className=\'hidden\'\\n type=\'radio\'\\n name=\'rating\'\\n id={`rating-${num}`}\\n value={num}\\n checked={rating === num}\\n onChange={(e: ChangeEvent<HTMLInputElement>) =>\\n setRating(+e.target.value)\\n }\\n />\\n <label htmlFor={`rating-${num}`}>\\n {num <= rating ? <IoStar /> : <IoStarOutline />}\\n </label>\\n </span>\\n );\\n })}\\n </span>\\n </div>\\n <div className=\'text-right\'>\\n <Button type=\'submit\'>Save</Button>\\n <Button variant=\'delete\' type=\'button\'>\\n Delete Book\\n </Button>\\n </div>\\n </Form>\\n </div>\\n </div>\\n </div>\\n );\\n}\\n\\n
This file contains several key functionalities. First, after the imports, there’s a loader that searches the storage
object and retrieves the book object corresponding to the ID in the URL parameters. For instance, if a user navigates to /book/0
, the loader will fetch the details of the book with an ID of 0
. Additionally, the route module allows users to modify book details. Users can mark whether they’ve finished the book, assign a rating out of five stars, and save their changes. They also have the option to delete the book entirely.
With all of that done, the app should now look like this:
\\nNow the basic loaders of our entire application are set. It’s time to move on to adding and deleting books from the book tracker.
\\nLike loaders, actions can only run in route modules — route modules being files inside the app/routes/
directory. Actions are functions that handle form submissions in a particular route. Actions that are supposed to run on the browser are exported as clientAction
while actions that run on the server are exported as action
.
An action receives parameters such as the URL params (as params) and the data submitted to the route (as request). The request here is implemented as an instance of the Request Web API, so it works with all of that API's functionality. These parameters come from the Route.ActionArgs type, of which every route module has a unique generated version inside .react-router.
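Here is a minimal sketch of both kinds of action exports (a hypothetical route module, not part of the book app):

// Hypothetical route module
import type { Route } from './+types/example';

// Runs on the server
export async function action({ params, request }: Route.ActionArgs) {
  const formData = await request.formData(); // standard Request Web API
  console.log(request.method, params);       // e.g. 'POST' and the matched URL params
  return { ok: formData.has('title') };
}

// Runs in the browser instead
export async function clientAction({ request }: Route.ClientActionArgs) {
  const formData = await request.formData();
  return { ok: formData.has('title') };
}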
The first thing this tutorial will use an action for is adding a new book to storage. Add this action function to the book-list.tsx module:
// app/routes/book-list.tsx\\n...\\nexport async function action({ request }: Route.ActionArgs) {\\n let formData = await request.formData();\\n let title = formData.get(\'title\') as string | null;\\n let author = formData.get(\'author\') as string | null;\\n let isbn = formData.get(\'isbn\') as string | undefined;\\n if (title && author) {\\n storage.books.push({\\n id: storage.books.length,\\n title,\\n author,\\n isbn: isbn || undefined,\\n isFinished: false,\\n });\\n }\\n\\n return storage;\\n}\\n\\n...\\n\\n
With that function in place, you should be able to add new books to the application:
\\nAfter filling out the form, the new book should appear on the book-list.tsx
route:
Next, you'll use an action to let a user edit and delete a book entry. To achieve this, add an action to the book.tsx route. This action will update the storage object with new info for a particular book, and delete the book if the request method to the route is \\"DELETE\\":
// app/routes/book.tsx\\n\\nimport { useState, type ChangeEvent } from \'react\';\\nimport { Link, Form, redirect, useSubmit } from \'react-router\';\\nimport { IoArrowBackCircle, IoStarOutline, IoStar } from \'react-icons/io5\';\\nimport type { Route } from \'./+types/book\';\\nimport Button from \'~/components/Button\';\\nimport { storage, type Book } from \'~/model\';\\n\\nexport async function action({ params, request }: Route.ActionArgs) {\\n let formData = await request.formData();\\n let { bookId } = params;\\n let newRating = (Number(formData.get(\'rating\')) ||\\n undefined) as Book[\'rating\'];\\n let isFinished = Boolean(formData.get(\'isFinished\'));\\n if (request.method === \'DELETE\') {\\n storage.books = storage.books.filter(({ id }) => +bookId !== id);\\n } else if (newRating && storage.books[+bookId]) {\\n Object.assign(storage.books[+bookId], {\\n isFinished,\\n rating: newRating,\\n });\\n }\\n return redirect(\'/\');\\n}\\n\\nexport async function loader({ params }: Route.LoaderArgs) {\\n const { bookId } = params;\\n const book: Book | undefined = storage.books.find(({ id }) => +bookId === id);\\n return book;\\n}\\n\\nexport default function Book({ loaderData }: Route.ComponentProps) {\\n const [isFinished, setIsFinished] = useState<boolean>(\\n loaderData?.isFinished || false\\n );\\n const [rating, setRating] = useState<number>(Number(loaderData?.rating));\\n\\n const submit = useSubmit();\\n\\n function deleteBook(bookId: number | undefined = loaderData?.id) {\\n const confirmation = confirm(\'Are you sure you want to delete this book?\');\\n confirmation && bookId &&\\n submit(\\n { id: bookId },\\n {\\n method: \'delete\',\\n }\\n );\\n }\\n\\n return (\\n <div className=\'mx-5\'>\\n <Link to=\'/\' className=\'text-purple-700 flex items-center gap-1 w-fit\'>\\n <IoArrowBackCircle /> Back to home\\n </Link>\\n <div className=\'flex mt-5 max-w-md\'>\\n <div className=\'w-48 h-72 shrink-0\'>\\n {loaderData?.isbn ? (\\n <img\\n className=\'w-full h-full\'\\n src={`https://covers.openlibrary.org/b/isbn/${loaderData.isbn}-L.jpg`}\\n alt={`Cover for ${loaderData.title}`}\\n />\\n ) : (\\n <span className=\'block w-full h-full bg-gray-200\'></span>\\n )}\\n </div>\\n <div className=\'flex flex-col ml-5 grow\'>\\n <span className=\'font-medium text-xl\'>{loaderData?.title}</span>\\n <span>{loaderData?.author}</span>\\n <Form method=\'post\'>\\n <span className=\'my-5 block\'>\\n <input\\n type=\'checkbox\'\\n name=\'isFinished\'\\n id=\'finished\'\\n checked={isFinished}\\n onChange={(e: ChangeEvent<HTMLInputElement>) =>\\n setIsFinished(e.target.checked)\\n }\\n />\\n <label htmlFor=\'finished\' className=\'ml-2\'>\\n Finished\\n </label>\\n </span>\\n <div className=\'mb-5\'>\\n <span>Your Rating:</span>\\n <span className=\'text-3xl flex\'>\\n {[1, 2, 3, 4, 5].map((num) => {\\n return (\\n <span key={num} className=\'flex\'>\\n <input\\n className=\'hidden\'\\n type=\'radio\'\\n name=\'rating\'\\n id={`rating-${num}`}\\n value={num}\\n checked={rating === num}\\n onChange={(e: ChangeEvent<HTMLInputElement>) =>\\n setRating(+e.target.value)\\n }\\n />\\n <label htmlFor={`rating-${num}`}>\\n {num <= rating ? <IoStar /> : <IoStarOutline />}\\n </label>\\n </span>\\n );\\n })}\\n </span>\\n </div>\\n <div className=\'text-right\'>\\n <Button type=\'submit\'>Save</Button>\\n <Button\\n variant=\'delete\'\\n type=\'button\'\\n onClick={() => deleteBook()}\\n >\\n Delete Book\\n </Button>\\n </div>\\n </Form>\\n </div>\\n </div>\\n </div>\\n );\\n}\\n\\n
Now, a user should be able to save details for every book entry. They should also be able to delete any book entry from the app (or storage
):
With that, all the basic functionalities of the app are done.
Status codes are a property of server responses that indicate the status of a client's request. A server can return, for example:
- 200, which means OK (the request was successful)
- 201, which means the request was successful and an entry was created
- 404, which means that a requested server resource was not found
In React Router, every requested page returns with a 200 status code, which is a generic way of saying that a request was successful. It also returns a 404 status code when a URL path has no corresponding route module. However, the React Router framework also allows a developer to send custom status codes to the client. Using them makes for a more communicative API, because the client gets to know the exact status of each request.
Using this feature in React Router v7 requires the data function from react-router. The function accepts the data to return (the loaderData or actionData) as its first argument. The second argument is an options object that contains the custom status code for the response.
Modify the app by responding with appropriate status codes. First, return a 201
(Created) when a user creates a new entry:
// app/routes/book-list.tsx\\n\\n// Imports\\nimport { data } from \'react-router\';\\n...\\n\\nexport async function action({ request }: Route.ActionArgs) {\\n let formData = await request.formData();\\n let title = formData.get(\'title\') as string | null;\\n let author = formData.get(\'author\') as string | null;\\n let isbn = formData.get(\'isbn\') as string | undefined;\\n if (title && author) {\\n storage.books.push({\\n id: storage.books.length,\\n title,\\n author,\\n isbn: isbn || undefined,\\n isFinished: false,\\n });\\n }\\n return data(storage, { status: 201 });\\n}\\n\\n...\\n\\n
Next, return a 404 (Not Found) when the user navigates to a book/:bookId route that does not exist:
// app/routes/book.tsx\\n\\n// Imports\\n...\\nimport { Link, Form, redirect, useSubmit, data } from \'react-router\';\\n...\\n\\n// Route module loader\\nexport async function loader({ params }: Route.LoaderArgs) {\\n const { bookId } = params;\\n const book: Book | undefined = storage.books.find(({ id }) => +bookId === id);\\n\\n if (!book) throw data(null, { status: 404 });\\n\\n return book;\\n}\\n\\n
These examples illustrate how easily you can add status codes. Add as many as you think appropriate for your routes.
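For instance (a hypothetical addition, not part of the final project), the book-list action could reject incomplete submissions with a 400 Bad Request:

// Hypothetical tweak to the action in app/routes/book-list.tsx
if (!title || !author) {
  return data({ error: 'Title and author are required' }, { status: 400 });
}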
Adding meta tags to the <head> tag in React Router
The HTML <head> tag is very important for the SEO performance of a web page. The React Router framework allows developers to update the <meta> tags in the <head> tag for as many pages as they want. These <meta> tags contain the metadata (title, description, keywords, viewport) of a particular page.
For the example project, add meta tags to the pages. Observe how you need to export a function meta
in the route modules to do that:
// app/routes/home.tsx\\n\\n// Imports\\n...\\nimport type { Route } from \'./+types/home\';\\n...\\n\\n\\nexport function meta({}: Route.MetaArgs) {\\n return [\\n { title: \'Book Tracker App\' },\\n { name: \'description\', content: \'Book Tracker Application\' },\\n ];\\n}\\n\\n...\\n\\n
Here are the <meta> tags for the About page:
// app/routes/about.tsx\\n\\n// Imports\\n...\\nimport type { Route } from \'./+types/about\';\\n...\\n\\nexport function meta({}: Route.MetaArgs) {\\n return [\\n { title: \'About Book Tracker App\' },\\n { name: \'description\', content: \'About this Application\' },\\n ];\\n}\\n\\n
Finally, here is a <meta>
tag for the book.tsx
route:
// app/routes/book.tsx\\n\\n// Imports\\n...\\nimport type { Route } from \'./+types/book\';\\n...\\n\\nexport function meta({ data }: Route.MetaArgs) {\\n return [{ title: `Edit \\"${data.title}\\"` }];\\n}\\n\\n
Notice the destructured data
object, which is an argument for the meta
function. Here, data
represents whatever the loader of that route returned.
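For instance (a hypothetical extension of the snippet above), you could derive more tags from the same loader data:

// Hypothetical: deriving extra meta tags from loader data in app/routes/book.tsx
export function meta({ data }: Route.MetaArgs) {
  return [
    { title: `Edit "${data.title}"` },
    { name: 'description', content: `Details for ${data.title} by ${data.author}` },
  ];
}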
With these changes made, the app should now have updated meta info in the browser’s tab bar.
Adding links to the <head> tag in React Router
HTML <link> tags define the relationship between a page and an external resource. They are mostly used to import CSS files and icons. React Router allows developers to add <link> tags to individual pages. This can be useful for features like adding custom favicons to a route.
In a route module, export a links
function:
export function links() {\\n return [\\n {\\n rel: \'icon\',\\n href: \'/favicon.png\',\\n type: \'image/png\',\\n },\\n ];\\n}\\n\\n
Inside the function, return an array. Each item of the array should be an object whose properties correspond to the attributes of a <link> tag, with values matching the values those attributes would have in HTML.
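The same mechanism works for other resources. For example (an illustrative sketch; /print.css is a hypothetical file), a route could load its own stylesheet alongside a favicon:

// Hypothetical links function combining a stylesheet and a favicon
export function links() {
  return [
    { rel: 'stylesheet', href: '/print.css' }, // hypothetical route-specific stylesheet
    { rel: 'icon', href: '/favicon.png', type: 'image/png' },
  ];
}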
HTTP headers in React Router allow the server to pass additional data to the client (along with the requested payload). They are used to send cookies to the browser, set up caching, and much more. You can add headers to your route modules by exporting a headers
function. For example:
// Route module\\n\\nexport function headers(){\\n return {\\n \\"Content-Disposition\\": \\"inline\\",\\n ...\\n \\"Header Name\\": \\"Header value\\"\\n }\\n}\\n\\n
Now the client will receive the response with your custom-set headers.
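As a concrete (hypothetical) example, the statically generated About page could tell browsers and CDNs to cache its response:

// Hypothetical addition to app/routes/about.tsx
export function headers() {
  return {
    'Cache-Control': 'max-age=300, s-maxage=3600', // 5 minutes in the browser, 1 hour on a shared cache
  };
}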
\\nInstead of releasing Remix v3, the team behind the framework merged Remix with React Router, resulting in React Router v7. With the release of React v19, the official React documentation now recommends using a framework to take full advantage of the new version. It specifically mentions Remix, now integrated as React Router v7, as one of the suggested frameworks for developers.
Despite this integration, there are notable differences between React Router v7 and Remix beyond one simply being the latest major version of the other. Here are a few of those differences:
- Routing configuration: Remix relies on file-based routing conventions, while React Router v7 declares the app's entire route hierarchy explicitly in the app/routes.ts file
- Loader and action data: Loader data was received using the useLoaderData() Hook in Remix. Action data was also received using the useActionData() Hook. While you can still do this in React Router v7, the framework recommends instead using the route's component props for both loaders and actions, as the sketch after this list shows. The Route.ComponentProps type is an object that contains the loaderData and actionData you can destructure and use inside your route components. This is an improvement, as it ensures better type safety in applications
- Generated types: React Router v7 generates route-specific types into the .react-router/ folder. Because of this, there are generated types for a route's component props, loader arguments, action arguments, meta function arguments, and much more. This enhances the type safety of an app's source code tremendously
There are other differences between the two frameworks apart from the ones listed above. Overall, however, the React Router framework is a definite improvement over the Remix framework.
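To make the second difference concrete, here is a sketch using this tutorial's book-list route purely as an illustration:

// Remix style (still works in React Router v7); assumes a 'loader' export in scope
import { useLoaderData } from 'react-router';

function BookListRemixStyle() {
  const { books } = useLoaderData<typeof loader>();
  return <span>{books.length} books</span>;
}

// React Router v7 style, with generated types
export default function BookList({ loaderData }: Route.ComponentProps) {
  return <span>{loaderData.books.length} books</span>;
}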
This article explored server-side rendering (SSR) with React Router v7, which combines React Router and Remix into a full-stack framework for building modern SSR and static site generation (SSG) applications. We demonstrated these concepts by creating a book tracking app, highlighted improvements in developer experience, type safety, and React v19 support, and compared React Router v7 to Remix.
\\nBy following this guide, developers can learn to implement SSR, SSG, and advanced functionalities like loaders, actions, and meta tags in React applications.
\\nThe final code for the example project can be found here.
Editor's note: This article was last reviewed and updated by Abiola Farounbi in January 2025 to cover how to accurately display PDFs within React applications after successfully generating them.
There are several reasons why you might want to incorporate a PDF document generator feature in your application, including:
- Generating invoices or receipts for purchases
- Exporting reports and other application content
- Giving users offline access to that content
Fortunately, the React ecosystem provides a wealth of tools offering simple ways to generate PDF documents directly within React applications.
\\nOne great tool is the react-pdf library, which simplifies the PDF generation process within your React project.
\\nIn this article, we’ll explore why you might want to implement a PDF document generation feature, the fundamentals of react-pdf, and advanced concepts. We’ll also compare react-pdf to other libraries and explore various methods for displaying PDFs within react applications.
\\nThere are many use cases for a PDF generator feature in a React application, such as offline access to the application’s content, report generation, and more. However, the most common use case for such a feature in modern apps is data exports — for example, invoices or brochures.
To put it into perspective, consider a thriving online business. Every sale means generating an invoice, a process that can quickly become tedious. In order to create a receipt for each customer, you would need to follow steps like:
- Opening a document editor and filling in the customer's details
- Exporting the document as a PDF
- Sending the file to the customer
\\nSure, that might work.
\\nBut consider this: what if the business gets hundreds of customers in a single day? This situation could result in a substantial waste of time and energy, as the same process would need to be repeated for each customer.
\\nSo, how do we mitigate this problem?
\\nThe best way to solve this issue is to automate this operation by using an API. This is where react-pdf and other similar libraries come in.
\\nWhile building a PDF generator from the ground up is technically possible, it’s crucial to weigh the pros and cons before taking this approach. Let’s explore this process vs. using a third-party library to see which aligns better with your project goals.
Pros of building a PDF generator from scratch include:
- Full control over features and customization
- A valuable learning experience
However, crafting a PDF document generator from the ground up can be tedious and time-consuming. You would need to consider implementing features such as text layout and pagination, styling, font and image embedding, and cross-browser compatibility.
\\nAs you can tell, this can be quite tedious. Additionally, beyond the initial development effort, maintaining a custom PDF generator requires ongoing upkeep. Bug fixes, compatibility updates, and potential security vulnerabilities all become your responsibility.
\\nAlthough the customization and learning experience of building your own PDF generator can be valuable, the time investment and ongoing maintenance burden can be significant. Using a library like react-pdf relieves you of the burden of implementing all these features, allowing you to focus on the core logic of your application.
\\nTo install react-pdf, run the following terminal command:
\\nnpm i @react-pdf/renderer\\n\\n
The following block of code renders a basic PDF document in the browser:
\\nimport {\\n Document,\\n Page,\\n Text,\\n View,\\n StyleSheet,\\n PDFViewer,\\n} from \\"@react-pdf/renderer\\";\\n// Create styles\\nconst styles = StyleSheet.create({\\n page: {\\n backgroundColor: \\"#d11fb6\\",\\n color: \\"white\\",\\n },\\n section: {\\n margin: 10,\\n padding: 10,\\n },\\n viewer: {\\n width: window.innerWidth, //the pdf viewer will take up all of the width and height\\n height: window.innerHeight,\\n },\\n});\\n\\n// Create Document Component\\nfunction BasicDocument() {\\n return (\\n <PDFViewer style={styles.viewer}>\\n {/* Start of the document*/}\\n <Document>\\n {/*render a single page*/}\\n <Page size=\\"A4\\" style={styles.page}>\\n <View style={styles.section}>\\n <Text>Hello</Text>\\n </View>\\n <View style={styles.section}>\\n <Text>World</Text>\\n </View>\\n </Page>\\n </Document>\\n </PDFViewer>\\n );\\n}\\nexport default BasicDocument;\\n\\n
In this code:
- The StyleSheet module allows developers to apply CSS code to their PDF documents. Here, we are telling React to change the background color and the font color of our pages
- In the viewer object, we are using the width and height properties. As a result, this will tell react-pdf that we want the browser's PDF viewer to take up all of the space on the page
- The PDFViewer component will render a PDF viewer on the browser
Let's test it out! As the next step, render the BasicDocument
component to the DOM like so:
import BasicDocument from \\"./BasicDocument\\";\\nfunction App() {\\n return (\\n <div className=\\"App\\">\\n <BasicDocument />\\n </div>\\n );\\n}\\nexport default App;\\n\\n
You should see the following:
\\nWe can even reduce the viewer’s available space:
\\nconst styles = StyleSheet.create({\\n viewer: {\\n width: window.innerWidth / 3,\\n height: window.innerHeight / 2,\\n },\\n //further code...\\n});\\n\\n
In this snippet, we restricted the viewport’s width
and height
properties. This will decrease their available sizes on the page:
The react-pdf library offers a variety of components we can display in our generated PDF. In this section, we’ll discuss and demonstrate some of these components.
\\n\\nWe can display anchor links using the Link
component. This is handy for cases where you want to redirect the user to a website:
import { Link } from \\"@react-pdf/renderer\\";\\n<Text>\\n <Link src=\\"www.facebook.com\\">Go to Facebook</Link>\\n</Text>\\n\\n
Here, we are assigning the src
prop to Facebook’s website. When the user clicks on this piece of text, the app will redirect them to the page:
To attach annotations to your document, use the Note
component. One critical use case for this element is when you need to display comments in a file:
import { Note } from \\"@react-pdf/renderer\\";\\n<Note>This will take the user to Facebook</Note>\\n\\n
Hovering over this annotation will display the text we set:
\\nCanvas
The Canvas
component lets users draw content directly on the page. This is suitable for displaying simple diagrams and logos.
This code snippet renders a triangle on the page:
\\nimport { Canvas } from \\"@react-pdf/renderer\\";\\n// Create styles\\nconst styles = StyleSheet.create({\\n canvas: {\\n backgroundColor: \\"black\\",\\n height: 500,\\n width: 500,\\n },\\n});\\n<Canvas\\n style={styles.canvas}\\n paint={\\n (painterObject) =>\\n painterObject\\n .save()\\n .moveTo(100, 100) //move to position 100,100\\n .lineTo(300, 100) //draw a line till 300, 100\\n .lineTo(300, 300) //draw another line till 300,300\\n .fill(\\"red\\") //when the diagram is drawn, fill the shape with red\\n }\\n/>\\n\\n
In the above snippet, we used the Canvas
component to display a diagram. The paint
prop is a callback function. One of its parameters is a painterObject
argument, which gives us access to drawing methods:
react-pdf also bundles an SVG
component to render SVG diagrams. Just like Canvas
, we can use this for rendering simple diagrams.
This piece of code renders a line on the page:
\\nimport { Line, Svg } from \\"@react-pdf/renderer\\";\\n// Create styles\\nconst styles = StyleSheet.create({\\n line: {\\n x1: \\"0\\", //starting coords are x1 and y1\\n y1: \\"0\\",\\n x2: \\"200\\", //ending coords:\\n y2: \\"200\\",\\n strokeWidth: 2,\\n stroke: \\"rgb(255,255,255)\\", //stroke color\\n },\\n});\\n<Svg width={\\"50%\\"} height={\\"50%\\"} style={{ backgroundColor: \\"blue\\" }}>\\n <Line style={styles.line} />\\n</Svg>\\n\\n
Here, we used Line
to render a line in the document. Notice that Line
is a child of the Svg
component:
We can also use the Polygon
component to render closed shapes like so:
import { Svg, Polygon } from \\"@react-pdf/renderer\\";\\n\\n<Svg width={\\"50%\\"} height={\\"50%\\"} style={{ backgroundColor: \\"blue\\" }}>\\n <Polygon\\n points=\\"100,100 200,100 200,250 100,250\\"\\n fill=\\"white\\" //color of background\\n stroke=\\"black\\" //color of border\\n strokeWidth={10} //border thickness\\n />\\n</Svg>\\n\\n
The points
prop accepts a dataset of coordinates. This will help the app render the graphic:
The Image
component gives us the ability to insert images over the network or on a local disk. This is great for displaying complex diagrams or screenshots.
This block of code renders a 500 by 500 pixel image on the PDF:
\\nimport { Image } from \\"@react-pdf/renderer\\";\\nconst styles = StyleSheet.create({\\n image: {\\n width: 500,\\n height: 500,\\n },\\n});\\n<Image\\n style={styles.image}\\n src=\\"https://images.pexels.com/photos/20066389/pexels-photo-20066389/free-photo-of-a-bubble-is-floating-in-the-sky-over-trees.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=2\\"\\n/> \\n\\n
The src
prop contains the source URL of the image that we want to render:
Now that we’ve gone through the fundamentals, let’s discuss some advanced concepts for using react-pdf to generate PDFs in a React project.
\\nJust like CSS, react-pdf lets developers use the flex
property, which allows for responsive design. This is handy for cases where you want your documents to scale up or down depending on the device’s screen size:
// Create styles. Notice that we have specified a flex direction.\\nconst styles = StyleSheet.create({\\n page: {\\n flexDirection: \\"column\\",\\n },\\n});\\n<Page size=\\"A4\\" style={styles.page}>\\n <View style={{ backgroundColor: \\"black\\", flex: 1 }}></View>\\n <View style={(styles.section, { backgroundColor: \\"pink\\", flex: 1 })}></View>\\n</Page>\\n\\n
In this piece of code, we used the flex
property on both of our View
components. This means that half the page will have a background color of black and the other half will have a pink colored background:
Page breaks are useful for ensuring that a certain element will always show up on the top of the page. We can enable page breaks via the break
prop like so:
// Create styles\\nconst styles = StyleSheet.create({\\n text: {\\n fontSize: 40,\\n },\\n});\\n// Create Document Component\\n<Page>\\n <Text break style={styles.text}>\\n First PDF break\\n </Text>\\n <Text break style={styles.text}>\\n Second break\\n </Text>\\n</Page>\\n\\n
The result will appear as shown below:
\\nWith react-pdf, we can render dynamic text using the render
prop of the Text
component like so:
<Document>\\n <Page size=\\"A4\\">\\n <Text\\n style={styles.text}\\n render={({ pageNumber, totalPages }) =>\\n `Page ${pageNumber} of ${totalPages}`\\n }\\n fixed\\n />\\n </Page>\\n <Page>\\n <Text> Hello, second page!</Text>\\n </Page>\\n</Document>\\n\\n
Here, the render
prop has two arguments:
pageNumber
: The current index of the pagetotalPages
: The total number of pages that this document containsWe are displaying both of their values to the client:
Note that the render function is executed twice for <Text /> elements: once during layout, as part of the page-wrapping process, and again once react-pdf knows how many pages the document will have. Therefore, use it in cases where app performance is not a concern.
We can also use the render
prop on our View
element:
<View render={({ pageNumber }) => (\\n //detect if user is NOT on an even page:\\n pageNumber % 2 === 0 && (\\n <View style={{ background: \'red\' }}>\\n {/*If condition is fulfilled, display this component*/}\\n <Text>I\'m only visible in odd pages!</Text>\\n </View>\\n )\\n )} />\\n\\n
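Besides rendering documents in the browser with PDFViewer, react-pdf also ships a PDFDownloadLink component for saving the generated document to disk. Here is a minimal sketch (the document and file name are arbitrary examples):

import { PDFDownloadLink, Document, Page, Text } from "@react-pdf/renderer";

// A tiny document to hand to the download link
const MyDoc = () => (
  <Document>
    <Page>
      <Text>Hello</Text>
    </Page>
  </Document>
);

// Renders an anchor that downloads the PDF when clicked
<PDFDownloadLink document={<MyDoc />} fileName="example.pdf">
  {({ loading }) => (loading ? "Preparing document..." : "Download PDF")}
</PDFDownloadLink>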
The react-pdf library is just one of the many tools available in the React ecosystem to help generate, render, annotate, or style PDF documents in a React application. Let’s explore other tools that you can use either in place of or alongside the react-pdf library and discuss their unique features and ideal use cases.
\\nreact-pdf-highlighter is a lightweight PDF annotation library built on the PDF.js package by Mozilla. You can use this package alongside other libraries that generate or render PDF documents (such as react-pdf) by integrating annotation features such as highlighting text in PDF documents after generating them.
\\nThe react-pdf-highlighter package undergoes regular updates and has 900+ stars on GitHub and 5K+ weekly downloads on npm.
Features of react-pdf-highlighter include:
- Text and rectangular area highlights
- Scrolling to a highlight from outside the PDF view
\\nQuickstart command:
\\nnpm i react-pdf-highlighter\\n\\n
react-pdf-tailwind is not exactly a PDF generator or renderer. It’s more of a utility tool that allows you to style PDF documents created with libraries, such as the react-pdf library, using Tailwind utility classes.
\\nBecause it’s built on Tailwind, it undergoes constant updates. It has over 300 stars on GitHub and 6K+ weekly downloads on npm. The react-pdf-tailwind library is basically a wrapper for Tailwind, so it doesn’t have any distinct features other than to style PDF documents using Tailwind utility classes.
\\nQuickstart command:
\\nnpm i react-pdf-tailwind\\n\\n
react-print-pdf is an across-the-board solution for creating PDF documents in a React application. Unlike other solutions, react-print-pdf gives you full control over your document’s layout. You can design complex and customized layouts with features like footnotes, headers, margins, and more.
\\nThis open source package has good community support, undergoes constant updates, and has over 2K+ stars on GitHub.
Features of react-print-pdf include:
- Full control over the document's layout
- Prebuilt support for headers, footers, footnotes, and margins
\\nQuickstart command:
\\nnpm install @onedoc/react-print\\n\\n
After generating PDFs, the next step is to accurately display them in your application.
\\n\\nPDFs can be displayed in React apps using various methods, including standard HTML elements and PDF viewing libraries like the ones mentioned above. These approaches provide flexibility depending on your use case, from simple embedding to advanced rendering.
\\nA commonly used method for doing so is using the <iframe>
element. You can embed the PDF file by setting its src
attribute to the PDF’s file path:
import React from \'react\';\\n\\nconst IframePDFViewer = () => {\\n return (\\n <div>\\n <iframe \\n src=\\"/document.pdf\\" //specify the path to the PDF file\\n title=\\"Sample PDF\\"\\n style={{width: \'600px\', height: \'500px\'}} //specify styling options\\n /> \\n </div>\\n );\\n};\\nexport default IframePDFViewer;\\n\\n
Another method is using the <embed>
element. It provides a straightforward method to display PDF files directly on a page but offers limited customization and flexibility in how the PDF is rendered:
import React from \'react\';\\n\\nconst EmbedPDFViewer = () => {\\n return (\\n <div>\\n <embed\\n src=\\"/document.pdf\\" //specify the path to the PDF file\\n type=\\"application/pdf\\" //specify the type of file\\n width=\\"100%\\"\\n height=\\"600px\\"\\n />\\n </div>\\n );\\n};\\nexport default EmbedPDFViewer;\\n\\n
Another method for displaying PDFs in React apps is using the <object>
element. This element defines a container for an external resource like a PDF. It also lets you provide fallback content for incompatible browsers:
import React from \'react\';\\n\\nconst ObjectPDFViewer = () => {\\n return (\\n <div>\\n <object\\n data=\\"/document.pdf\\" //specify the path to the PDF file\\n type=\\"application/pdf\\" //specify the type of file\\n width=\\"600px\\"\\n height=\\"600px\\">\\n <p>Here\'s a link to <a href=\\"/document.pdf\\">the PDF</a> instead.</p>\\n </object> \\n </div>\\n );\\n};\\nexport default ObjectPDFViewer;\\n\\n
N.B., static assets such as PDF files are typically placed in the public
folder of your project. This folder is directly accessible via the root URL of your application, making it an ideal location for serving files that need to be publicly accessible.
We can also use the wojtekmaj/react-pdf package to display PDFs in React applications. This approach comes with control and flexibility in how the PDFs are displayed. The package is designed solely for displaying or rendering PDF documents, making it suitable for use alongside a PDF generator such as the react-pdf package.
\\nwojtekmaj/react-pdf has excellent community support and undergoes constant updates. Additionally, it boasts over 9K+ stars on GitHub and 900K+ weekly downloads on npm. With such popularity, you can be assured that it is suitable for use in your applications.
Features of the wojtekmaj/react-pdf library include:
- Rendering PDF documents page by page or in full
- Optional text and annotation layers on top of each rendered page
\\nQuickstart command:
\\nnpm i react-pdf\\n\\n
An example use case is using the library to display PDFs one page at a time:
\\nimport React, { useState } from \\"react\\";\\nimport { Document, Page } from \\"react-pdf\\";\\n\\nexport default function DisplayPDF() {\\n\\n const [numPages, setNumPages] = useState(null);\\n const [pageNumber, setPageNumber] = useState(1);\\n\\n const filePath = \\"/document.pdf\\";\\n\\n function onDocumentLoadSuccess({ numPages }) {\\n setNumPages(numPages);\\n }\\n\\n function changePage(offset) {\\n setPageNumber((prevPageNumber) => {\\n const newPageNumber = prevPageNumber + offset;\\n if (newPageNumber >= 1 && newPageNumber <= numPages) {\\n return newPageNumber;\\n }\\n return prevPageNumber;\\n });\\n }\\n\\n return (\\n <>\\n <Document\\n file={filePath}\\n options={{ workerSrc: \\"/pdf.worker.js\\" }}\\n onLoadSuccess={onDocumentLoadSuccess}\\n >\\n <Page pageNumber={pageNumber} />\\n </Document>\\n <div>\\n <p>\\n Page {pageNumber || (numPages ? 1 : \\"--\\")} of {numPages || \\"--\\"}\\n </p>\\n <button\\n type=\\"button\\"\\n disabled={pageNumber <= 1}\\n onClick={() => changePage(-1)}\\n >\\n Previous\\n </button>\\n <button\\n type=\\"button\\"\\n disabled={pageNumber >= numPages}\\n onClick={() => changePage(1)}\\n >\\n Next\\n </button>\\n </div>\\n </>\\n );\\n}\\n\\n
In this code:
- pageNumber and numPages are initially defined to manage the current page and the total number of pages
- The <Document> component loads the PDF file (/document.pdf) and uses the onLoadSuccess callback to retrieve the total number of pages
- The options property specifies the path to the PDF worker script required by the library for efficient rendering
- The <Page> component displays the current PDF page based on the pageNumber
- The Previous and Next buttons call changePage, which updates the pageNumber state accordingly

In this article, we covered the fundamentals of the react-pdf library. Not only is it secure and robust, but it is also lightweight, thus bringing good performance to the table. We also covered different methods of displaying PDFs in the browser and how to display PDFs one page at a time.
\\nThank you for reading. Happy coding!
Handling dynamic, structured data is a common challenge in modern web applications. Whether building spreadsheets, surveys, or data grids, developers need forms that can adapt to user input. Angular's FormArray
is a powerful container tool designed for this purpose.
FormArray
makes it easy to create and manage dynamic rows and columns of input fields, providing a seamless way to build spreadsheet-like interfaces.
In this guide, you’ll learn how to:
\\nFormArray
containerBy the end of this guide, you’ll have a functional pseudo-spreadsheet application and a strong understanding of how Angular’s reactive forms simplify complex, dynamic data handling.
\\nLet’s get started!
\\nTo get started, ensure you have Node.js and the Angular CLI installed. To create a new Angular project, run the following command:
\\nng new dynamic-formarray-app\\n
During setup, enable routing (by answering Yes) and choose your preferred CSS preprocessor. Once the project is created, navigate to the project folder and install the necessary dependencies, including Bootstrap for styling:
npm install bootstrap\\n
Add Bootstrap to angular.json
under the styles
array:
\\"styles\\": [\\n \\"node_modules/bootstrap/dist/css/bootstrap.min.css\\",\\n \\"src/styles.css\\"\\n]\\n
Add PapaParse for robust CSV parsing:
\\nnpm install papaparse\\n
Finally, generate a new component for the spreadsheet interface:
\\nng generate component components/spreadsheet\\n
The Angular project is now set up and ready for development.
\\nTo dynamically generate form controls, we first need to upload and parse a CSV file. Add a file input element to your template:
\\n<div class=\\"mb-3\\">\\n <label for=\\"csvFile\\" class=\\"form-label\\">Upload CSV File:</label>\\n <input\\n type=\\"file\\"\\n id=\\"csvFile\\"\\n class=\\"form-control\\"\\n accept=\\".csv\\"\\n (change)=\\"onFileUpload($event)\\"\\n />\\n</div>\\n
In your component file (spreadsheet.component.ts
), use Angular’s FormBuilder
and PapaParse to process the uploaded file:
import { Component, OnInit } from \'@angular/core\';\\nimport { FormArray, FormBuilder, FormControl, FormGroup, Validators } from \'@angular/forms\';\\nimport * as Papa from \'papaparse\';\\n\\n@Component({\\n selector: \'app-spreadsheet\',\\n templateUrl: \'./spreadsheet.component.html\',\\n styleUrls: [\'./spreadsheet.component.css\']\\n})\\nexport class SpreadsheetComponent implements OnInit {\\n spreadsheetForm!: FormGroup;\\n\\n constructor(private fb: FormBuilder) {}\\n\\n ngOnInit(): void {\\n this.spreadsheetForm = this.fb.group({ rows: this.fb.array([]) });\\n }\\n\\n get formArray(): FormArray {\\n return this.spreadsheetForm.get(\'rows\') as FormArray;\\n }\\n\\n onFileUpload(event: Event): void {\\n const file = (event.target as HTMLInputElement).files?.[0];\\n if (file) {\\n Papa.parse(file, {\\n complete: (result) => this.loadCsvData(result.data),\\n skipEmptyLines: true\\n });\\n }\\n }\\n\\n loadCsvData(data: any[]): void {\\n const rows = this.formArray;\\n rows.clear();\\n data.forEach((row) => {\\n const formRow = this.fb.array(row.map((value: string) => this.fb.control(value, Validators.required)));\\n rows.push(formRow);\\n });\\n }\\n}\\n
The code snippet above achieves the following:
- The <input> element captures the file and triggers the onFileUpload method
- FormArray: Each row in the CSV becomes a FormArray of FormControls, allowing Angular to manage the data reactively
corresponds to a FormArray
of cells, represented as FormControl
instances.
In the template (spreadsheet.component.html
), use Angular’s structural directives to display rows and cells:
<form [formGroup]=\\"spreadsheetForm\\">\\n <div *ngFor=\\"let row of formArray.controls; let i = index\\" class=\\"row mb-2\\">\\n <div *ngFor=\\"let cell of (row as FormArray).controls; let j = index\\" class=\\"col\\">\\n <input\\n type=\\"text\\"\\n [formControl]=\\"cell\\"\\n class=\\"form-control\\"\\n [ngClass]=\\"{ \'is-invalid\': cell.invalid && cell.touched }\\"\\n placeholder=\\"Cell {{ i + 1 }}, {{ j + 1 }}\\"\\n />\\n <div *ngIf=\\"cell.invalid && cell.touched\\" class=\\"invalid-feedback\\">\\n <span *ngIf=\\"cell.hasError(\'required\')\\">This field is required.</span>\\n </div>\\n </div>\\n </div>\\n</form>\\n
Here’s what’s happening in the code block above:
- The outer *ngFor loops over the FormArray rows, creating a <div> for each row
- The inner *ngFor loops through the cells, rendering an <input> for each FormControl
Validation ensures that the input meets specific criteria. Angular supports built-in validators like Validators.required
and allows for custom validation logic.
Create a custom validator to ensure numeric input:
\\nfunction validateNumeric(): ValidatorFn {\\n return (control: AbstractControl): { [key: string]: any } | null => {\\n const value = control.value;\\n return isNaN(value) || value.trim() === \'\' ? { numeric: true } : null;\\n };\\n}\\n
Update the loadCsvData
method to include this validator:
loadCsvData(data: any[]): void {\\n const rows = this.formArray;\\n rows.clear();\\n data.forEach((row) => {\\n const formRow = this.fb.array(\\n row.map((value: string) => this.fb.control(value, [Validators.required, validateNumeric()]))\\n );\\n rows.push(formRow);\\n });\\n}\\n
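The CSV flow above builds rows from the parsed file, but FormArray also makes it easy to add or remove rows at runtime. Here is a minimal sketch (addRow and removeRow are hypothetical helpers for the SpreadsheetComponent class, not part of the component above):

addRow(columns: number = 3): void {
  // Create a row of empty, required cells and append it to the FormArray
  const formRow = this.fb.array(
    Array.from({ length: columns }, () => this.fb.control('', Validators.required))
  );
  this.formArray.push(formRow);
}

removeRow(index: number): void {
  // Remove the row at the given index from the FormArray
  this.formArray.removeAt(index);
}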
Once the user modifies the form, allow them to download the updated data as a CSV file using the Blob API.
\\nHere is the code for the CSV export:
\\ndownloadCsv(): void {\\n const headers = [\'Column 1\', \'Column 2\', \'Column 3\'];\\n const rows = this.formArray.controls.map((row) =>\\n (row as FormArray).controls.map((control) => control.value)\\n );\\n\\n const csvArray = [headers, ...rows];\\n const csvData = csvArray.map((row) => row.join(\',\')).join(\'\\\\n\');\\n\\n const blob = new Blob([csvData], { type: \'text/csv;charset=utf-8;\' });\\n const url = window.URL.createObjectURL(blob);\\n\\n const link = document.createElement(\'a\');\\n link.href = url;\\n link.setAttribute(\'download\', \'modified-data.csv\');\\n document.body.appendChild(link);\\n link.click();\\n document.body.removeChild(link);\\n window.URL.revokeObjectURL(url);\\n}\\n
Finally, we’ll include a button in the template to trigger the download:
\\n<button type=\\"button\\" class=\\"btn btn-secondary\\" (click)=\\"downloadCsv()\\">\\n Download Modified CSV\\n</button>\\n
And that’s it! We’ve successfully built a fully functional pseudo-spreadsheet application capable of dynamically generating form controls, validating user inputs, and exporting modified data — all powered by Angular’s FormArray
.
By following this guide, you learned how to:

- Parse CSV data and load it into a dynamically generated FormArray
- Validate user inputs with built-in and custom validators
- Export the modified data back to a CSV file
This solution is highly adaptable, making it suitable for various real-world scenarios like data grids, surveys, or interactive spreadsheets.
\\n\\nBy mastering Angular’s FormArray
, you can build flexible, dynamic form applications that meet real-world needs, such as data grids, spreadsheets, and surveys. Now you have the tools to simplify complex form handling with Angular.
Happy coding!
Editor’s note: This article was last updated on 28 December 2023 to explore integrating MUI with React iframes for styling, and handling events in React iframes.
When building webpages, developers often need to include resources from other webpages. Some common examples that you may recognize from browsing the web include the share button from X (formerly Twitter), the like button on Facebook, and the map display from Google Maps.
\\nA popular way to retrieve this type of data is with an iframe. Short for inline frame, an iframe is essentially a frame within a frame. Using the <iframe/>
tag, you can easily embed external content from other sources directly into your webpage. Developers also use iframes to isolate resources on the same webpage, for example, encapsulating components by rendering them inside an iframe.
In this tutorial, we’ll explore iframes in React by looking at these two different use cases. First, we’ll cover some background information about how iframes work and how we should use them. Let’s get started!
\\nWhen a resource is rendered in an iframe, it functions independently of the parent component where it is embedded. Therefore, neither the parent component’s CSS styling nor its JavaScript will have any effect on the iframe.
\\nIn React, developers use iframes to create either a sandboxed component or an application that is isolated from its parent component. In an iframe, when a piece of content is embedded from an external source, it is completely controlled by the source instead of the website it is embedded in.
\\nFor this reason, it’s important to embed content only from trusted sources. Keep in mind also that iframes use up additional memory. If the content is too large, it can slow down your webpage load time, so you should use iframes carefully.
\\nFirst, let’s learn how to embed pages from external sources, which is probably the more common use case of iframes. Nowadays, you rarely see a web app that doesn’t have any content loaded from an external source.
\\nFor example, consider how many YouTube videos you find on webpages, Instagram posts you see outside of the app, Facebook comment sections on blogs, and even ads on webpages. Each of these elements is embedded into the website, which can range in complexity from a single line of code to an entire code section.
\\nWe can use the following line of code to add an X post button to a React app:
\\n<iframe src=\\"https://platform.twitter.com/widgets/tweet_button.html\\" ></iframe>\\n\\n
We’ll use the code snippet above in the following code to generate a post button like the one seen in the following screenshot. When a user clicks the post button, the selected content will open in a new post on their X homepage:
\\nfunction App() {\\n return (\\n <div className=\\"App\\">\\n <h3>Iframes in React</h3>\\n <iframe src=\\"https://platform.twitter.com/widgets/tweet_button.html\\"></iframe>\\n </div>\\n );\\n}\\n\\nexport default App;\\n\\n
Now, let’s review some useful attributes of the iframe tag, which will allow you to modify and customize your iframes. For one, src
is used to set the address of the webpage that you want to embed. For example, we can use the src
tag to embed a YouTube video as follows:
<iframe src=\\"https://www.youtube.com/embed/uXWycyeTeCs\\" ></iframe>\\n\\n
srcdoc
is used to set an inline HTML to embed. Note that the srcdoc
attribute will override the src
attribute if both are present. In the code snippet below, we’ll override the src
command for a YouTube video with the srcdoc
command, which uses a hello message as a placeholder:
<iframe src=\\"https://www.youtube.com/embed/uXWycyeTeCs\\" srcDoc=\'<p>Hello from Iframe</p>\' ></iframe>\\n\\n
We’ll use height
and width
attributes to set the dimensions of our iframe. The default unit is pixels, but you can use other units as well. In the code snippet below, we’ll set the dimensions of an iframe that displays a YouTube video, as seen in the following screenshot:
<iframe src=\\"https://www.youtube.com/embed/uXWycyeTeCs\\" width={1000} height={500} ></iframe>\\n\\n
The allow
attribute sets the features available to the <iframe>
based on the origin of the request, for example, accessing autoplay, microphone, and more.
In the screenshot below, we set allow
for our YouTube video with the following values: accelerometer
, autoplay
, clipboard-write
, encrypted-media
, gyroscope
, picture-in-picture, and fullscreen
:
<iframe src="https://www.youtube.com/embed/uXWycyeTeCs" width={1000} height={500} allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; fullscreen"></iframe>
The title
attribute is used to set a description for the content in the iframe. While the title
attribute has no effect on the UI of the iframe, it is helpful for accessibility, providing valuable input to screen readers. In the code snippet below, we’re adding a title
to our YouTube video:
<iframe src=\\"https://www.youtube.com/embed/uXWycyeTeCs\\" width={1000} height={500} title=\'A youtube video on React hooks\'></iframe>\\n\\n
Next, we’ll use the name
attribute to set the iframe name and use it to reference the element in JavaScript. Similarly, you can also set the name
attribute as the value of the target
attribute of an a
or form
element or the value of the formtarget
attribute of an input
or button
element.
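For instance, a link whose target matches an iframe’s name loads inside that iframe rather than in a new tab. Here’s a minimal sketch (the name content-frame is purely illustrative):

<iframe name="content-frame" src="https://example.com"></iframe>
<!-- Clicking this link replaces the iframe's content instead of navigating the page -->
<a href="https://example.com/docs" target="content-frame">Open the docs in the frame</a>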
To set restrictions on the content of the iframe, we use the sandbox attribute. As mentioned earlier in the tutorial, we can’t control the content sent from an external source; however, we can restrict what we accept in the iframe using sandbox. To apply all restrictions, leave the value of the attribute empty. Or, you can add flags to relax the restrictions.
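An empty sandbox attribute applies every restriction at once:

<iframe src="https://example.com" sandbox></iframe>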
For example, allow-modals
will allow the external page to open modals, and allow-scripts
will allow the resource to run scripts:
<iframe src="https://www.youtube.com/embed/uXWycyeTeCs" width={1000} height={500} sandbox="allow-scripts allow-modals"></iframe>
Properly using the sandbox attribute and its flags can greatly improve your app’s security, especially if the resource you’re embedding is from a third party.
To set how the browser should load the iframe content, we’ll use the loading
attribute; loading
takes either eager
or lazy
. When set to eager
, the iframe is loaded immediately, even if it is outside the visible viewport, which is the default value. lazy
delays loading until it reaches a calculated distance from the viewport, as defined by the browser:
<iframe src="https://www.youtube.com/embed/uXWycyeTeCs" width={1000} height={500} sandbox="allow-scripts allow-modals" loading="eager"></iframe>
When you don’t need to control the content that is being rendered in an iframe, the method we just covered is a good strategy. If you want to show your page visitors a video on YouTube or a post on Instagram, you can simply add the URL to the src
attribute.
But what if you are building a complex app, for example, a CodeSandbox that allows users to build standalone apps on the same platform, or a chatbot that gets triggered when a user clicks a button? In both examples, you are in control of the content, but you also want to isolate them from the rest of the app.
In this section, we’ll explore rendering a React app or component in an iframe. This is a good strategy when you want to avoid CSS bleed or use a full-fledged app inside another app without any interference, especially when you want the content of the iframe to share state with its parent.
\\nLet’s try to render our iframe as a direct child of the iframe:
\\nfunction App() {\\n return (\\n <div className=\\"App\\">\\n <p>Iframes in React</p>\\n <iframe>\\n <MyComponent />\\n </iframe>\\n </div>\\n );\\n}\\n\\nexport default App;\\n\\n
However, when we run the code above, we get nothing:
\\nYou can only use the src
attribute to set the URL that we want to render. Because we are trying to render a component in the same app, the src
attribute won’t work.
Alternatively, we could use the srcdoc
attribute, which takes in inline HTML to embed. However, we’re trying to render an entire app or component, which would be far too verbose to inline as a string of HTML. We need a way to render the component into the iframe’s body instead of as a child of it. For that, we’ll use a portal.
According to the React documentation, portals allow us to render children into a DOM node that exists outside of the parent component’s DOM hierarchy. Basically, portals let us render children wherever we want to.
\\nYou can create a portal with the following command:
\\nReactDOM.createPortal(children, domNode, key?)\\n\\n
In this case, the children
parameter can be a piece of JSX, a React Fragment, a string, a number, or an array of these. The domNode
parameter is the DOM location or node to which the portal should be rendered. The optional key
parameter is a unique string or number React will use as the portal’s key.
With a React portal, we can choose where to place a DOM node in the DOM hierarchy. To do so, we’ll first establish a reference to an existing and mounted DOM node. In this case, it would be in the contentWindow
of a given <iframe>
. Then, we’ll create a portal with it. The portal’s contents are also considered children of the parent’s virtual DOM.
Let’s say we have the following file called MyComponent.js
:
import React from \\"react\\";\\n\\nfunction MyComponent() {\\n return (\\n <div>\\n <p style={{ color: \\"red\\" }}>Testing to see if my component renders!</p>\\n </div>\\n );\\n}\\n\\nexport default MyComponent;\\n\\n
Now, let’s create a file called CustomIframe.js
and write the following code:
import React, { useState } from \'react\'\\nimport { createPortal } from \'react-dom\'\\n\\nconst CustomIframe = ({\\n children,\\n ...props\\n}) => {\\n const [contentRef, setContentRef] = useState(null)\\n\\n const mountNode =\\n contentRef?.contentWindow?.document?.body\\n\\n return (\\n <iframe {...props} ref={setContentRef}>\\n {mountNode && createPortal(children, mountNode)}\\n </iframe>\\n )\\n}\\n\\nexport default CustomIframe;\\n\\n
We created a ref with the useState() Hook; therefore, once the state is updated, the component will re-render. We also got access to the iframe document body, and then created a portal to render the children passed to the iframe
in its body instead:
import \\"./App.css\\";\\nimport CustomIframe from \\"./CustomIframe\\";\\nimport MyComponent from \\"./MyComponent\\";\\n\\nfunction App() {\\n return (\\n <CustomIframe title=\\"A custom made iframe\\">\\n <MyComponent />\\n </CustomIframe>\\n );\\n}\\n\\nexport default App;\\n\\n
You can pass any React app or component as a child of CustomIframe
, and it will work just fine! The React app or component will become encapsulated, meaning you can develop and maintain it independently.
You can also achieve the same encapsulation as above using the react-frame-component library. To install it, run the following command:
\\nnpm install --save react-frame-component\\n\\n
Encapsulate your component as follows:
\\nimport Frame from \'react-frame-component\';\\n\\nfunction App() {\\n return (\\n <div className=\'App\'>\\n <p>Iframes in React</p>\\n <Frame >\\n <MyComponent />\\n </Frame>\\n </div>\\n );\\n}\\n\\nexport default App;\\n\\n
As explained above, the content of an iframe is a complete document with its own markup and static assets like JavaScript, images, videos, fonts, and styling. Out of the box, the default Material UI (MUI) settings will not apply styling to React components rendered inside an iframe. Because an iframe is an isolated environment, you need to explicitly integrate MUI with the iframes in your React project.
\\nBy default, MUI uses emotion as its styling engine under the hood, and the generated styles are injected into the parent frame’s head
element. However, the styles aren’t inserted into the iframe out of the box because the iframe’s document is different.
You can use the @emotion/cache
package with the <CacheProvider/>
component to get emotion to work within embedded contexts like iframes. The @emotion/cache
package is not installed by default. You need to install it from the npm package registry like so:
npm i @emotion/cache\\n\\n
After installing @emotion/cache
, you can add the changes below to the CustomIframe
component wrapper we created while learning how to use the React portal to render a component in an iframe:
import React, { useState } from \\"react\\";\\nimport { createPortal } from \\"react-dom\\";\\n\\nimport { CacheProvider } from \\"@emotion/react\\";\\nimport createCache from \\"@emotion/cache\\";\\n\\nconst CustomIframe = ({ children, ...props }) => {\\n const [contentRef, setContentRef] = useState(null);\\n\\n const cache = createCache({\\n key: \\"css\\",\\n container: contentRef?.contentWindow?.document?.head,\\n prepend: true,\\n });\\n\\n const mountNode = contentRef?.contentWindow?.document?.body;\\n\\n return (\\n <CacheProvider value={cache}>\\n <iframe {...props} ref={setContentRef}>\\n {mountNode && createPortal(children, mountNode)}\\n </iframe>\\n </CacheProvider>\\n );\\n};\\n\\nexport default CustomIframe;\\n\\n
In the code above, we wrapped the iframe inside the <CacheProvider/>
component. The generated styles will be inserted into the iframe’s head
element. You can also insert them into a DOM node other than the head
element.
As explained above, communication between an iframe and its parent frame is subject to the same-origin restriction because of security concerns. Therefore, most features are restricted if the iframe is from a different origin than the parent frame.
\\nHowever, the parent frame can still communicate with an embedded iframe via events using the window.postMessage()
API, though they may not be from the same origin.
For the same-origin frames, you can access the iframe from its parent. Similarly, you can also access the parent frame from the iframe.
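Here’s a minimal sketch of cross-frame messaging with window.postMessage(); the origins and message shape are illustrative:

// Parent frame: send a message to the embedded iframe
const iframe = document.querySelector('iframe');
iframe.contentWindow.postMessage(
  { type: 'greeting', text: 'Hello from the parent!' },
  'https://embedded-origin.example' // restrict delivery to this origin
);

// Inside the iframe: listen for messages from the parent
window.addEventListener('message', (event) => {
  // Always verify the sender's origin before trusting the data
  if (event.origin !== 'https://parent-origin.example') return;
  console.log('Received:', event.data);
});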
\\nAn iframe is a document with assets like JavaScript, CSS, images, and audio. It will take a bit of time to load. Therefore, it’s good practice to render a loading indicator while the iframe is still loading to provide a good user experience.
\\nA fully loaded iframe will emit the onLoad
event. You can update the loading state in the onLoad
event handler to remove the loading indicator when the iframe has finished loading:
function App() {\\n const [loadingIframe, setLoadingIframe] = useState(true);\\n\\n return (\\n <>\\n <iframe\\n src=\\"https://www.youtube.com/embed/uXWycyeTeCs\\"\\n onLoad={() => setLoadingIframe(false)}\\n ></iframe>\\n {loadingIframe ? <p> Loading iframe</p> : null}\\n </>\\n );\\n}\\n\\n
As hinted above, the content of an iframe is a complete document with markup and other static assets like images, styles, and JavaScript. These assets use bandwidth, take time to load, and use memory after loading. Therefore, it is necessary to optimize the iframe page load.
\\nIf your iframe renders a video, you can optimize it by rendering a simple thumbnail or placeholder image and loading the iframe after the user clicks it. It ensures faster page load, and the user only downloads and watches the video if it is necessary to do so.
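One way to implement this pattern is to render the thumbnail first and only mount the iframe after a click. Here’s a minimal sketch, assuming a placeholder thumbnail image:

import { useState } from "react";

function LazyVideo() {
  const [showVideo, setShowVideo] = useState(false);

  // Render a lightweight placeholder until the user opts in to load the video
  if (!showVideo) {
    return (
      <img
        src="/video-thumbnail.jpg" // hypothetical placeholder image
        alt="Play video"
        onClick={() => setShowVideo(true)}
      />
    );
  }

  return (
    <iframe
      src="https://www.youtube.com/embed/uXWycyeTeCs"
      title="A youtube video on React hooks"
      width={1000}
      height={500}
      allow="autoplay"
    />
  );
}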
\\nFurthermore, you can optimize the iframe load by setting the iframe’s loading
attribute to lazy
. If you load all the iframe resources as soon as the page loads, you pay the ultimate penalty of loading off-screen content that the user may not see or interact with:
<iframe\\n src=\\"https://www.youtube.com/embed/uXWycyeTeCs\\"\\n width={1000}\\n height={500}\\n loading=\\"lazy\\"\\n></iframe>;\\n\\n
Doing so will defer loading off-screen iframes until the user scrolls near them. As a result, it increases page load speed, saves bandwidth, and reduces your site’s memory usage.
\\nOmitting the loading
attribute is equivalent to setting its value to eager
. The browser eagerly loads the iframe as soon as the page loads, irrespective of whether the user will see or interact with it.
In this tutorial, we explored iframes in React in two different use cases. First, we learned how to embed external content from a webpage into a web application with an iframe, separating the JavaScript and CSS of the parent and child elements. Second, we learned how to isolate certain parts of our app in an iframe.
\\niframes are a useful element that is essential for developers to learn. I hope you enjoyed this tutorial!
As your web app grows in complexity, it becomes essential to master the art of debugging.
\\nEffective JavaScript debugging involves more than just fixing errors. It requires an understanding of how your code works under the hood to ensure your app runs smoothly and delivers the best user experience.
\\nMinified code, which is the version of your code that reaches users in production, is optimized for performance. However, minified code can be a nightmare to debug. When users encounter errors, reproducing and diagnosing issues in minified code is often challenging.
\\nHowever, with the right tools, JavaScript debugging can become much easier. This article will explore how to leverage source maps to debug minified code and dive into other techniques using Chrome DevTools to efficiently identify and resolve issues in your web app.
\\nWe’ll work on a simple app that increments a count and logs it onto the console. This app demonstrates how minified code can make debugging tricky and how source maps can help simplify the process.
\\nCreate the .js
files below and add the code snippets as shown:
1. src/counterCache.js
export const countCache = {
  previousCount: 0,
  currentCount: 0,
  totalCount: 0
}

export function updateCache(currentCount, previousCount) {
  countCache.currentCount = currentCount;
  countCache.previousCount = previousCount;
  countCache.totalCount = countCache.totalCount || 0 + countCache.currentCount;
}
2. src/counter.js:

import { updateCache } from './counterCache.js';

let count = 0;

export function incrementCounter() {
  count += 1;
  const previousCount = count;
  updateCache(count, previousCount);
}
3. src/index.js:
import { incrementCounter } from \'./counter\';\\nimport { countCache } from \'./counterCache\';\\nconst button = document.createElement(\'button\');\\nconst previousElement = document.getElementById(\'previous\');\\nconst currentElement = document.getElementById(\'current\');\\nconst totalElement = document.getElementById(\'total\');\\nbutton.innerText = \'Click me\';\\ndocument.body.appendChild(button);\\nbutton.addEventListener(\'click\', () => {\\n incrementCounter();\\n previousElement.innerText = countCache.previousCount;\\n currentElement.innerText = countCache.currentCount;\\n totalElement.innerText = countCache.total();\\n});\\n
In your package.json
file, add the webpack packages as shown below then run npm i
to install them. We’ll use webpack as part of the build process to generate minified code for production:
\\"devDependencies\\": {\\n \\"webpack\\": \\"^5.96.1\\",\\n \\"webpack-cli\\": \\"^5.1.4\\"\\n }\\n\\n
To enable code minification, add a webpack.config.js
file with the following snippet. Setting the mode to production
tells webpack to apply optimizations such as minification:
const path = require(\'path\');\\n module.exports = {\\n mode: \'production\', // Enables optimizations like minification and tree-shaking\\n entry: \'./src/index.js\', // Specifies the entry point of your application\\n output: {\\n path: path.resolve(__dirname, \'dist\'),// Defines the output directory for bundled files\\n filename: \'bundle.js\',// Specifies the name of the bundled file\\n },\\n };\\n
Now run npx webpack
to bundle and minify your code. The dist/bundle.js
file is generated with content as shown below. Minification obscures variable and function names, and removes unnecessary characters like whitespace, comments, and unused code, making the output file smaller and faster to load:
(()=>{\\"use strict\\";const t={};let e=0;const n=document.createElement(\\"button\\"),o=document.getElementById(\\"previous\\"),u=document.getElementById(\\"current\\"),r=document.getElementById(\\"total\\");n.innerText=\\"Click me\\",document.body.appendChild(n),n.addEventListener(\\"click\\",(()=>{var n,c;e+=1,n=e,c=e,t.currentCount=n,t.previousCount=c,t.totalCount=t.totalCount||0+t.currentCount,o.innerText=t.previousCount,u.innerText=t.currentCount,r.innerText=t.total()}))})();\\n
Next, update the index.html
file to reference the bundled output, ensuring your application uses the minified code:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Web Debugging Example</title>
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <h1>Web Debug App</h1>
  <p>Check console for bug</p>
  <table>
    <thead>
      <tr>
        <th>Previous count</th>
        <th>Current count</th>
        <th>Total count</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td id="previous">0</td>
        <td id="current">0</td>
        <td id="total">0</td>
      </tr>
    </tbody>
  </table>

  <script src="./dist/bundle.js"></script> <!-- Include the bundled output -->
</body>
</html>
Finally, run the app and check the console after clicking the button. To preview the app locally, you can use the Live Server extension in VS Code:
\\nThe error in the console, t.total is not a function
, is difficult to interpret. Clicking on the file in the console does not help pinpoint the issue due to the compact and obfuscated nature of minified code. Identifying the root cause of such an error in a large codebase can be frustrating and time-consuming, as the minified code obscures the original logic and context.
Let’s demonstrate eight methods to help make JavaScript debugging a bit easier:
\\nSource maps are files that map your minified code back to the original source code. They make debugging easier and help investigate issues in production. The file names of source maps end with .map
.
To generate source maps using webpack, update the webpack.config.js
file as follows:
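Building on the earlier configuration, the updated file might look like this; devtool: 'source-map' is the relevant addition:

const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/index.js',
  devtool: 'source-map', // Emit an external .map file alongside the minified bundle
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
};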
The devtool: 'source-map' option tells webpack to generate an external .map file that maps the minified code back to your original source code (an alternative like devtool: 'eval-source-map' inlines the maps and is better suited to development builds). A comment with the source map file URL is also appended to the minified code in the bundle.js file.
Now run npx webpack
. The .map
file will generate alongside your minified bundle. Serve the application using a local server, and open it in an Incognito browser window. This prevents browser extensions and cached files from interfering with your debugging process.
With source maps generated, the following observations are made:
- The error now points to the counter.js file, which is the original source code
- bundle.js.map is successfully fetched and is visible under the Developer resources tab
\\nWith the clear error above, we are able to fix the error and access the correct property on countCache
.
Our guide on how to use Chrome DevTools should provide a great start. To open the Developer resources tab, click on the More icon, then More tools, then Developer resources. This tab allows you to view the source map load status and even load source maps manually:
\\nThe code snippet below fixes the bug on the console. Update your code, then run npx webpack
to compile the changes. Once completed, serve the application and view the updated output in the table:
totalElement.innerText = countCache.totalCount;\\n\\n
Clicking the button currently updates the previous count, current count, and total on the table. The previous count should show the value of count before the last increment, and the total count should show the sum of all count values. At the moment, the previous count displays the current count, while the total count is stuck at one.
In the next section, we’ll explore additional JavaScript debugging techniques, such as using breakpoints and stepping through the code, to identify and fix this issue:
\\nBreakpoints allow you to pause the execution of your code at specific lines, helping you inspect variables, evaluate expressions, and understand the code flow. Depending on your goal, there are different breakpoints you can use. For instance:
\\nIn our sample application, we’ll apply a breakpoint to the incrementCounter
function. On the Sources panel, open the counter.js
file and click to the left of line six. This sets a line-of-code breakpoint after the count is increased:
We’ll set another breakpoint at line five and edit it. To edit our breakpoint, we’ll right-click on the highlighted section and then click on Edit breakpoint:
\\nWe’ll set the breakpoint type to Logpoint, then enter the message to be logged to the console:
\\nBy clicking the button, our application pauses at the line-of-code breakpoint and prints a debug log on the console from the Logpoint set:
\\nFrom the image we can see the following sections:
\\nWith this, we can debug our app further.
\\nThe scope panel can be effective for JavaScript debugging, as it allows you to see variables from the original source:
\\nWe can see the following scope variables:
\\nFrom the scope panel and the log point breakpoint, we can see that the current count is one while the count before the increase is zero. We therefore need to store the count before the increment as the previous count.
\\nStepping through your code involves navigating through the program in different ways during JavaScript debugging:
\\nYou can use the debug controls to step through your code. The Step control enables you to run your code, one line at a time. Clicking on Step will execute line six and move to line seven. Note how the value of previousCount
changes in the scope:
The Step over control allows you to execute a function without going through it line by line:
\\nThe Step into control allows you to go into a function. In the function, you can step through the code line by line or Step out of the function as shown below. Stepping out of the function will finish the execution of the remaining lines:
\\n\\n
To fix our issue, we’ll update the code as shown below. This now displays the previous count on the table correctly:
\\nimport { updateCache } from \'./counterCache.js\';\\nlet count = 0;\\nexport function incrementCounter() {\\n const previousCount = count;\\n count += 1;\\n updateCache(count, previousCount);\\n}\\n\\n
The call stack shows the sequence of function calls that led to the current point in the code.
\\nAdd a new breakpoint in the counterCache.js
file as shown, then click the button. Observe the call stack panel:
There are three function calls made when the app executes line six of counterCache.js
. To observe the flow of any functions in the stack, you can restart their execution using Restart frame, as shown below:
When debugging, you may wish to ignore certain scripts during your workflow. This helps skip over the complexities of code from libraries or code generators. In our case, we want to ignore the counter.js
script while debugging.
On the Page tab, right-click on the file to be ignored and add the script to the ignore list:
\\nRunning the app and pausing on the breakpoint, we can see the incrementCounter
function is now ignored on the call stack. You can hide or show the ignored frames:
You can group your files in the Pages tab for easier navigation as shown in the image below:
\\nWatch expressions let you track specific variables or expressions as your code executes, helping you monitor changes in real time. You can add expressions like countCache
to monitor the value as you step through the code:
To try to fix the bug with the total count, you may run code snippets on the console to understand the logical error. When debugging code that you run repeatedly on the console, you can make use of Snippets.
\\nOn the Snippets tab, add a sample debug script, save the script then click Enter to run the script:
\\nYou can observe that the expression with the bug needs to be rearranged to fix the issue:
\\ncountCache.totalCount = (countCache.totalCount || 0) + currentCount;\\n\\n
You can explore additional resources on debugging web apps such as this article on debugging React apps with React DevTools, which offers valuable insights into debugging React-based applications. Additionally, this guide on debugging Node.js with Chrome DevTools provides tips for debugging server-side JavaScript using watchers and other advanced DevTools features. These resources can complement the techniques discussed here and broaden your understanding of debugging web apps.
\\nThis tutorial explored debugging minified code busing source maps and Chrome DevTools. By generating source maps, we mapped minified code back to its original source, making it easier to debug our web app. Chrome DevTools further enhanced the JavaScript debugging process with methods such as breakpoints, stepping through code, watch expressions, and more.
\\nWith these tools, developers can efficiently debug and optimize their applications, even when dealing with complex, minified codebases. The complete code for this project can be found on GitHub.
Editor’s note: This article was last reviewed and updated by Elijah Asaolu in January 2025 to cover common errors with Fetch not being defined or failing, and to provide guidance on making GET
and POST
requests.
The stabilization of the Fetch API in Node.js has been one of the most anticipated upgrades in recent years, as it provides a standardized and modern approach to performing HTTP requests in both the browser and server environments. To better understand why this is such a big deal, let’s explore the history of HTTP requests, how Fetch came to be, and what its stabilization means for Node developers.
\\nIn the early days of the web, it was difficult to perform asynchronous requests across websites; developers had to use clumsy approaches to interact across multiple networks.
\\nInternet Explorer 5 changed this in 1998 with the introduction of the XMLHttpRequest
API. Initially, XMLHttpRequest
was designed to fetch XML data via HTTP, hence the name. Sometime after it was released, however, support for other data formats — primarily JSON, HTML, and plaintext — was added.
The XMLHttpRequest
API worked like a charm back then, but as the web grew, it became so difficult to work with that JavaScript frameworks (notably jQuery) had to abstract it to make implementation easier and success/error handling smoother.
In 2015, the Fetch API was launched as a modern successor to XMLHttpRequest
, and it has subsequently become the de facto standard for making asynchronous calls in web applications. One significant advantage Fetch has over XMLHttpRequest
is that it leverages promises, allowing for a simpler and cleaner API while avoiding callback hell.
Though the Fetch API has been around for a while now, it wasn’t included in the Node.js core because of some limitations. In a question answered by one of Node’s core contributors, it was noted that the browser’s Fetch API implementation is dependent on a browser-based Web Streams API and the AbortController
interface (for aborting fetch requests), which wasn’t available in Node until recently. As such, it was difficult to choose the best approach to include it in the Node core.
Long before the addition of the Fetch API, the request module was the most popular method for making HTTP requests in Node. But the JavaScript ecosystem at large quickly evolved, and newly introduced patterns made the request module obsolete. A crucial example here is async/await
; there was no provision for this in the request API, and the project was later deprecated due to these limitations.
In 2018, Undici was introduced as a newer and faster HTTP/1.1 client for Node.js, with support for pipelining and pooling, among other features. The Node core team worked hard on Undici, fixing performance issues, guaranteeing stability, and aligning the library with the Node project’s goals.
\\nUndici served as the foundation for Node.js’ native fetch()
implementation, which provided a high-performance, standards-compliant solution for performing HTTP requests. With the integration of Undici into the Node.js core, developers obtained access to a strong and fast HTTP client, laying the foundation for the inclusion of the Fetch API in Node.js v18 and finally stable in v21.
As previously mentioned, Fetch was added to the Node.js core in v18. However, until v21, it was mostly experimental, with unavoidable bugs and instabilities. The stable release in v21 is a big milestone for developers, as it means that it has been tested thoroughly, so you can trust it to work in different scenarios without surprises.
\\nThe Fetch API is provided as a high-level function, which means you don’t have to do any import/require
before using it in your Node.js applications.
In its most basic version, it takes a URL, sends a GET
request to that URL (if no request method is defined), and returns a promise that resolves with the response. Here’s an example:
fetch(\\"http://example.com/api/endpoint\\")\\n .then((response) => {\\n // Do something with response\\n })\\n .catch((err) => {\\n // Handle error here\\n console.log(\\"Unable to fetch -\\", err);\\n });\\n\\n
You can also change how the fetch
process is carried out by appending an optional object after the URL, which allows you to change things like request methods, request headers, and other options:
fetch(\\"http://example.com/api/endpoint\\", {\\n method: \\"POST\\", // Specify request method\\n headers: {\\n // Customize request header here\\n \\"Content-Type\\": \\"application/json\\",\\n // . . .\\n },\\n body: JSON.stringify({\\n foo: \\"bar\\",\\n bar: \\"foo\\",\\n }),\\n // . . .\\n})\\n .then((res) => res.json())\\n .then((data) => {\\n console.log(\\"Response data:\\", data);\\n })\\n .catch((err) => {\\n console.log(\\"Unable to fetch -\\", err);\\n });\\n\\n
As shown in the updated example above, we’re now sending a POST
request to the example API endpoint with body data foo
and bar
while also setting the content type to application/json
via the headers option. Next, we use the first then
to return the response as JSON and handle the converted JSON response in the second then
.
The fact that the Fetch API now comes prepackaged as an inbuilt Node module is extremely beneficial to the developer community. Some of these benefits include:
\\nInbuilt fetch for Node.js might mean the end for packages like node-fetch, got, cross-fetch, and many others that were built for the same purpose. This means you won’t have to conduct an npm install
before performing network operations in Node.
Furthermore, node-fetch, currently the most popular fetch package for Node.js, was recently switched to an ESM-only package. This means you’re unable to import it with the Node require()
function. The native Fetch API will make HTTP fetching in Node environments feel much smoother and more natural.
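For example, with node-fetch v3 and later, a CommonJS require fails while an ESM import works:

// CommonJS: throws ERR_REQUIRE_ESM with node-fetch v3+
// const fetch = require('node-fetch');

// ESM: works as expected
import fetch from 'node-fetch';

const res = await fetch('https://example.com');
console.log(res.status);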
Developers who have previously used the Fetch API on the front end will feel right at home using the inbuilt fetch()
method in Node. It’ll be much simpler and more intuitive than using external packages to achieve the same functionality in a Node environment.
As mentioned previously, the new Fetch implementation is also based on Undici. As a result, you should anticipate improved performance from the Fetch API, as well.
\\nThe browser’s Fetch API has some drawbacks in and of itself, and these will undoubtedly be transferred to the new Node.js Fetch implementation:
\\nThe Node.js Fetch API does not have built-in support for progress events during file uploads or downloads. Developers who want fine-grained control over monitoring progress may find this limitation difficult to overcome and may need to look into alternative libraries or solutions.
\\nWhile the Fetch API is stable in newer Node.js versions, projects using earlier Node.js versions may encounter compatibility difficulties. Developers working in environments where upgrading Node.js is not possible may be forced to rely on alternative libraries or develop workarounds.
\\nThe Fetch API’s approach to cookie management is based on browser behavior, which may result in unexpected outcomes in a Node.js environment. When dealing with cookies, developers must exercise caution and ensure accurate configuration for specific use cases.
\\nMigrating to the official, stable Node.js Fetch API is pretty straightforward, as outlined below:
\\nFirst, ensure that your Node.js version is at least v21 or higher. You can check your current Node.js version using the following command:
\\nnode -v\\n\\n
If your Node.js version is below version 21, upgrade to the latest version. You can download the latest version directly from the official Node.js website or use a version manager like nvm (Node Version Manager). For example, you can use NVM to list all available versions of Node.js by running the following command:
\\nnvm ls-remote\\n\\n
Once you’ve seen a preferred version, you can then install it by running:
\\nnvm install x.y.z\\n# E.g to install v22.13.0:\\nnvm install v22.13.0\\n\\n
If you were using an external library for making requests in Node.js, such as node-fetch
or another custom HTTP library, you can remove those dependencies now that the stable Fetch API is available natively:
npm uninstall node-fetch\\n\\n
Replace any code that currently utilizes an external library with the native fetch
implementation. Be careful to update the syntax and handle promises correctly.
Here’s what your code looked like before the update:
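If you were using node-fetch, for example, the pre-migration code would have looked something like this (the extra import is the only real difference):

import fetch from 'node-fetch';

fetch('https://api.example.com/data')
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error('Error:', error));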
\\nAnd what it looks like after you update to use native Fetch:
\\nfetch(\\"https://api.example.com/data\\")\\n .then((response) => response.json())\\n .then((data) => console.log(data))\\n .catch((error) => console.error(\\"Error:\\", error));\\n\\n
As required, review and adjust any options or headers in your requests. The native Fetch API may differ in behavior or provide extra functionality, so consult the official documentation for any specific requirements.
\\nAfter you’ve made these changes, thoroughly test your application to check that the migration to the native Fetch API didn’t cause any problems. Take great care with error handling, timeouts, and any custom logic associated with HTTP requests.
\\nMany developers encounter errors when trying to use Fetch in Node.js. Let’s look at some of the most common issues and how to resolve them.
\\nA common issue is the ReferenceError: fetch is not defined
, which typically occurs when using Node.js versions older than v18. Fetch was introduced in Node.js v18 as an experimental feature and became stable in v21. To resolve this error, upgrade to Node.js v18 or later, as described in the previous section.
Furthermore, if you’re using an experimental version (v18–v20), you’ll need to run your app with the experimental flag:
\\nnode --experimental-fetch example.js\\n\\n
Additionally, for legacy projects where upgrading Node.js isn’t an option, you can use a library like node-fetch to add Fetch support to your project.
\\nAnother common error is TypeError: Failed to fetch
, which is usually triggered by network connectivity problems, invalid endpoints, or Cross-Origin Resource Sharing (CORS) restrictions, i.e., when the server you’re making the request to doesn’t include the necessary CORS headers to allow requests from the client’s origin.
To resolve this, verify that the endpoint URL is valid. Also ensure the server is configured to allow CORS by adding the appropriate headers or using techniques like proxy middleware.
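If you control the server, for example, an Express-style middleware can add the required headers. A minimal sketch (the allowed origin is a placeholder):

const express = require('express');
const app = express();

// Allow cross-origin requests from a specific client origin
app.use((req, res, next) => {
  res.setHeader('Access-Control-Allow-Origin', 'https://your-client.example'); // placeholder origin
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  next();
});

app.get('/data', (req, res) => res.json({ ok: true }));

app.listen(3000);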
\\nSimilar to the above, TypeError: Cannot read property \'json\' of undefined
is another common error when using fetch
. This typically occurs when you attempt to parse a JSON response that is either malformed or not returned as expected by the server.
To resolve, confirm that the endpoint is returning a valid JSON structure. In some cases, the server might return a blob
, bytes
, or text
data instead, which should be accessed using their appropriate methods, as shown below:
fetch(\'https://api.example.com/data\')\\n .then(response => {\\n if (!response.ok) {\\n throw new Error(`Request failed with status: ${response.status}`);\\n }\\n return response.json(); // Use .blob(), .bytes(), or .text() if required\\n })\\n .then(data => console.log(data))\\n .catch(error => console.error(\'Fetch error:\', error));\\n\\n
Fetch in Node.js mimics browser-like behavior; however, unlike browsers, it doesn’t automatically handle cookies, i.e., cookies aren’t stored or sent unless you manage them yourself. For example, if an API relies on cookies for authentication, you’ll need to manually include them in your requests or use a library such as fetch-cookie or tough-cookie to manage them.
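For instance, the fetch-cookie package wraps fetch with a cookie jar. A minimal sketch based on its documented usage:

import makeFetchCookie from 'fetch-cookie';

// Wrap the built-in fetch so cookies are stored and replayed automatically
const fetchWithCookies = makeFetchCookie(fetch);

// Cookies set by the first response are sent along with the second request
await fetchWithCookies('https://example.com/login', { method: 'POST' });
await fetchWithCookies('https://example.com/profile');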
\\nFetch doesn’t natively support timeouts, so if a server takes too long to respond, your request might hang indefinitely. To avoid this, you can use AbortController
to set a timeout for your Fetch requests, as shown in the example below:
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5000); // Abort after 5 seconds

fetch("http://example.com/api", { signal: controller.signal })
  .then((response) => response.json())
  .catch((err) => console.log("Fetch error:", err))
  .finally(() => clearTimeout(timeout)); // Clean up the timer once the request settles
This way, you have full control over how long to wait for a response before canceling the request.
\\nThe stabilization of the Node.js Fetch API, made possible by the tireless efforts of the Node.js core team and the key role played by the high-performance HTTP client library Undici, represents a big step forward for developers. The stable release also means you can now use it without the fear of bugs or unexpected issues, and you can enjoy a consistent experience when making HTTP requests in both browser and server environments.
\\n Loading skeletons are placeholders that mimic the content being loaded on a UI for a more user-friendly experience. These placeholders minimize wait time frustration while ensuring a stable and visually smooth UI.
\\nIn this guide, we’ll explore practical examples and advanced techniques for building a loading state using the React Loading Skeleton package, as well as how to build a loading skeleton without relying on external dependencies.
\\nThe GIFs below illustrate the difference between a traditional loading spinner and a loading skeleton.
\\nThis example demonstrates the use of a loading spinner/loading text:
\\nThis example demonstrates the use of a loading skeleton:
\\nWe’ll take the starter project, which currently uses a loading spinner, and transform it to use a loading skeleton for a smoother loading experience.
\\nN.B., I’ve set the browser’s DevTools network to 3G for the demos in this lesson, as a faster network would make the loading skeleton less noticeable.
\\nFirst, let’s install the library:
\\nnpm install react-loading-skeleton\\n\\n
A basic implementation of this package will look like this:
\\n// ...\\nimport Skeleton from \'react-loading-skeleton\';\\nimport \'react-loading-skeleton/dist/skeleton.css\';\\nexport default function Post() {\\n const { data: post, error, isLoading } = useQuery({ /* Your query here */ });\\n return (\\n <article>\\n <h1 className=\\"text-xl md:text-2xl font-medium mb-6\\">\\n {isLoading ? <Skeleton /> : post.title}\\n </h1>\\n {/* Render other content here */}\\n </article>\\n );\\n}\\n\\n
In the example code, we imported the Skeleton
component from the React Loading Skeleton package, along with its CSS file for styling. While the data is being fetched (when isLoading
is true), a skeleton placeholder is shown for the title. Once the data is loaded, the actual post title is displayed.
Here, we used the Skeleton
component directly within the element being loaded. This way, the skeleton automatically adapts to the existing styles of the content:
React Loading Skeleton provides various props to customize the appearance, layout, and behavior of skeleton placeholders. Here’s an example that demonstrates how to customize the Skeleton
components:
<div className="flex items-center gap-4 p-4">
  {isLoading ? (
    <Skeleton circle width={48} height={48} />
  ) : (
    <img
      src={`https://picsum.photos/seed/${user.id}/200`}
      alt={`${user?.name}'s profile`}
      className="w-12 h-12 rounded-full object-cover"
    />
  )}
  <div className="flex-1">
    <h3 className="font-semibold text-gray-800">
      {isLoading ? <Skeleton width={'50%'} /> : user.name}
    </h3>
    <p className="text-gray-600 text-sm">
      {isLoading ? (
        <Skeleton
          width={500}
          baseColor="#ffcccb"
          highlightColor="#add8e6"
          duration={2}
        />
      ) : (
        user.email
      )}
    </p>
  </div>
</div>
This code uses the width
and height
props (with values in pixels or percentages) to define the size of the skeletons, the circle
prop to create circular placeholders, duration
to control the animation speed, baseColor
to set the skeleton’s default background color, and highlightColor
to define the highlight color of the animation.
The output visually simulates an avatar and user details while the data is loading:
\\nAs we continue, we’ll explore additional props for features such as multi-line rendering, custom styling, and animation control to further improve the loading experience.
\\nNow that we’ve covered the basics of using React Loading Skeleton, let’s revisit the starter project and transform it from using a loading spinner to using a loading skeleton.
\\nAs shown earlier, customizing individual Skeleton
components works well for simple use cases. However, in larger applications, manually maintaining consistent styling across multiple components can become challenging and inefficient. To simplify this, React Loading Skeleton offers the SkeletonTheme
component, which ensures consistent styling for all Skeleton
components within the React tree.
SkeletonTheme
To maintain a consistent design, we’ll wrap the top level of our application in a SkeletonTheme
. This allows us to define shared styles, such as baseColor
, highlightColor
, and duration
, for all skeleton components. This approach eliminates the need to specify these props individually for each component:
import { SkeletonTheme } from \'react-loading-skeleton\';\\nimport \'react-loading-skeleton/dist/skeleton.css\';\\n// ...\\ncreateRoot(document.getElementById(\'root\')!).render(\\n // ...\\n <SkeletonTheme\\n baseColor=\\"#d5d4d3\\"\\n highlightColor=\\"#f2f0ef\\"\\n duration={2}\\n >\\n <RouterProvider router={router} />\\n </SkeletonTheme>\\n);\\n\\n
In this top-level file, we’ve also imported the style file required for the skeleton components to render properly.
\\nTo implement a loading skeleton, let’s update the routes/post.tsx
file. Initially, it might look like this:
<div className=\\"max-w-4xl mx-auto\\">\\n <GoBack />\\n {isLoading && (\\n <div className=\\"text-xl font-medium\\">A moment please...</div>\\n )}\\n {error && (\\n <div className=\\"text-red-700\\">{`Error fetching post data: ${error}`}</div>\\n )}\\n <article>\\n <h1 className=\\"text-xl md:text-2xl font-medium mb-6\\">\\n {post?.title}\\n </h1>\\n <p>{post?.body}</p>\\n </article>\\n</div>\\n\\n
Now, by integrating the Skeleton
component, we can offer a smooth loading experience while the data is being fetched. Here’s the updated code:
import Skeleton from \'react-loading-skeleton\';\\nexport default function Post() {\\n // ...\\n return (\\n <div className=\\"max-w-4xl mx-auto\\">\\n <GoBack />\\n {error && (\\n <div className=\\"text-red-700\\">{`Error fetching post data: ${error}`}</div>\\n )}\\n <article>\\n <h1 className=\\"text-xl md:text-2xl font-medium mb-6\\">\\n {isLoading ? <Skeleton /> : post.title}\\n </h1>\\n <p>{isLoading ? <Skeleton count={2} /> : post.body}</p>\\n </article>\\n </div>\\n );\\n}\\n\\n
As expected, the Skeleton
component is rendered when the isLoading
state is set to true
. Additionally, we utilized the count
prop on the paragraph skeleton to generate multiple lines (two in this case), mimicking the appearance of a block of text. The GIF below showcases the result:
Let’s take a look at the code responsible for rendering the list of user cards. Currently, we map through the user data to render individual UserCard
components. While the data is being fetched, a loading spinner from RiLoader2Fill
is displayed:
<ul className=\\"grid grid-cols-[repeat(auto-fit,minmax(250px,1fr))] gap-2 px-2\\">\\n {isLoading && (\\n <div className=\\"min-h-[300px] justify-items-center content-center\\">\\n <RiLoader2Fill className=\\"size-6 animate-spin \\" />\\n </div>\\n )}\\n {users?.map((user) => (\\n <UserCard\\n user={{\\n ...user,\\n imageUrl: `https://picsum.photos/seed/${user.id}/200`,\\n }}\\n key={user.id}\\n />\\n ))}\\n</ul>\\n\\n
To implement the loading skeleton instead, we will create a dedicated CardSkeleton
component that mimics the structure of the final card. Here’s how it looks:
import Skeleton from \'react-loading-skeleton\';\\nconst CardSkeleton = ({ cardItems }: { cardItems: number }) => {\\n const skeletonItems = Array(cardItems).fill(0);\\n return skeletonItems.map((_, index) => (\\n <li\\n className=\\"border-b border-gray-100 text-sm sm:text-base flex gap-4 items-center p-4\\"\\n key={index}\\n >\\n <Skeleton circle width={48} height={48} />\\n <Skeleton count={1.7} containerClassName=\\"flex-1\\" />\\n </li>\\n ));\\n};\\nexport default CardSkeleton;\\n\\n
The component accepts a cardItems
prop, which determines the number of skeleton items to display. Each card contains a circle skeleton for the avatar and a text skeleton with the count
prop to generate multiple lines of text. Using a value like 1.7
creates one full-width skeleton with a shorter one below it. To allow the skeleton to grow within a flexible layout, containerClassName=\\"flex-1\\"
is used.
If necessary, you can achieve the same layout with this alternative:
\\n<div className=\\"flex-1\\">\\n <Skeleton count={1.7} />\\n</div>\\n\\n
Now, we can render the CardSkeleton
component during loading rather than the loading spinner:
import CardSkeleton from \'./CardSkeleton\';\\n// ...\\nconst CardList = () => {\\n // ...\\n return (\\n <ul className=\\"grid grid-cols-[repeat(auto-fit,minmax(250px,1fr))] gap-2 px-2\\">\\n {isLoading && <CardSkeleton cardItems={12} />}\\n {users?.map((user) => (\\n // ...\\n ))}\\n </ul>\\n );\\n};\\nexport default CardList;\\n\\n
Here is the result:
\\nWhen images or other resources are fetched from external sources, delays can occur, as shown in the GIF above.
\\nTo enhance the user experience, we’ll use skeleton loaders to display placeholder content until the image is fully loaded. This ensures that text appears first, while images or larger assets continue to show their loading skeletons, keeping the interface responsive and ensuring smooth transitions once all data is fully loaded.
\\n\\nThe following code tracks the loading state of the image. The handleImageLoad
function is triggered once the image finishes loading, setting isImageLoaded
to true
:
export const UserCard = ({ user }: UserCardProps) => {\\n const [isImageLoaded, setIsImageLoaded] = useState(false);\\n const handleImageLoad = () => {\\n setIsImageLoaded(true); // Set state to true once the image has loaded\\n };\\n // ...\\n return (\\n // ...\\n <div className=\\"w-12 h-12 relative\\">\\n {!isImageLoaded && <Skeleton circle width={48} height={48} />}\\n <img\\n src={user.imageUrl}\\n alt={`${user.name}\'s profile`}\\n className={`w-12 h-12 rounded-full object-cover\\n ${isImageLoaded ? \'opacity-100\' : \'opacity-0\'}`}\\n onLoad={handleImageLoad}\\n />\\n </div>\\n // ...\\n );\\n};\\n\\n
This addition improves the user experience by showing a loading skeleton until the image is fully loaded, offering a smoother visual transition:
\\nWe can customize the gradient of the highlight in the skeleton animation using the customHighlightBackground
prop. This prop can be applied individually to each Skeleton
, or globally through the SkeletonTheme
:
<SkeletonTheme\\n baseColor=\\"#d5d4d3\\"\\n highlightColor=\\"#f2f0ef\\"\\n duration={2}\\n customHighlightBackground=\\"linear-gradient(\\n 90deg,\\n var(--base-color) 30%,\\n #ffcccb 45%,\\n var(--highlight-color) 60%,\\n #add8e6 80%,\\n var(--base-color) 100%\\n )\\"\\n>\\n <RouterProvider router={router} />\\n</SkeletonTheme>\\n\\n
The skeleton will now use a custom gradient defined by customHighlightBackground
instead of the default highlight animation based on the provided baseColor
and highlightColor
:
When implementing light/dark themes in an application, it’s important to ensure the skeleton loader’s background color aligns with the active theme. Instead of hardcoding baseColor
and highlightColor
, we will dynamically apply colors based on whether the dark or light theme is active. This ensures the skeleton loader matches the overall theme of the application:
// ...\\nconst isDarkTheme = true;\\nconst darkThemeStyles = {\\n baseColor: \'#374151\',\\n highlightColor: \'#151c2b\',\\n};\\nconst lightThemeStyles = {\\n baseColor: \'#ebebeb\',\\n highlightColor: \'#f5f5f5\',\\n};\\ncreateRoot(document.getElementById(\'root\')!).render(\\n // ...\\n <SkeletonTheme\\n baseColor={\\n isDarkTheme\\n ? darkThemeStyles.baseColor\\n : lightThemeStyles.baseColor\\n }\\n highlightColor={\\n isDarkTheme\\n ? darkThemeStyles.highlightColor\\n : lightThemeStyles.highlightColor\\n }\\n duration={2}\\n >\\n <RouterProvider router={router} />\\n </SkeletonTheme>\\n // ...\\n);\\n\\n
With this update, the skeleton placeholder will now adapt to the chosen theme:
\\nWhile we’ve demonstrated how the React Loading Skeleton package simplifies the implementation of skeleton loaders, relying on third-party tools can introduce unnecessary dependencies to our project.
\\nWith Tailwind CSS, we can easily create a visually appealing and flexible skeleton loader using the following components:
\\nimport { cn } from \'../lib/utils\';\\nexport function CustomSkeleton({\\n className,\\n ...props\\n}: React.HTMLAttributes<HTMLDivElement>) {\\n return (\\n <div\\n className={cn(\\n \'animate-pulse rounded-md bg-[#d5d4d3]\',\\n className\\n )}\\n {...props}\\n />\\n );\\n}\\n\\n
The Tailwind animate-pulse
class adds a pulsing animation to indicate loading, while the className
prop lets us customize the shape and size of the skeleton.
Now, let’s update the CardSkeleton
component to use the CustomSkeleton
component:
import { CustomSkeleton } from \'./CustomSkeleton\';\\nconst CardSkeleton = ({ cardItems }: { cardItems: number }) => {\\n const skeletonItems = Array(cardItems).fill(0);\\n return skeletonItems.map((_, index) => (\\n <li\\n className=\\"border-b border-gray-100 text-sm sm:text-base flex gap-4 items-center p-4\\"\\n key={index}\\n >\\n <div className=\\"flex items-center space-x-4\\">\\n <CustomSkeleton className=\\"h-12 w-12 rounded-full\\" />\\n <div className=\\"space-y-2\\">\\n <CustomSkeleton className=\\"h-4 w-48\\" />\\n <CustomSkeleton className=\\"h-4 w-28\\" />\\n </div>\\n </div>\\n </li>\\n ));\\n};\\nexport default CardSkeleton;\\n\\n
The skeleton’s size and shape are customized using utility classes (h-12
, w-48
, rounded-full
, etc.). Here’s how the loading experience looks in action:
If you look closely at the GIF, you can see that the loading effect now includes a pulsing animation.
\\nTo enable a custom animation, we need to extend the Tailwind CSS configuration by adding keyframes and defining the animation in the configuration file, as shown below:
\\nkeyframes: {\\n shimmer: {\\n \'0%\': {\\n backgroundPosition: \'-200% 0\',\\n },\\n \'100%\': {\\n backgroundPosition: \'200% 0\',\\n },\\n },\\n},\\nanimation: {\\n shimmer: \'shimmer 2s linear infinite\',\\n},\\n\\n
This configuration defines the shimmer animation, which smoothly moves the gradient across the skeleton.
\\nNext, we’ll replace the pulsing effect with the shimmering gradient animation:
\\nimport { cn } from \'../lib/utils\';\\nexport function CustomSkeleton({\\n className,\\n ...props\\n}: React.HTMLAttributes<HTMLDivElement>) {\\n return (\\n <div\\n className={cn(\\n \'relative overflow-hidden rounded-md\',\\n \'before:absolute before:inset-0 before:animate-shimmer\',\\n \'before:bg-gradient-to-r before:from-[#d5d4d3] before:via-[#f2f0ef] before:to-[#d5d4d3] before:bg-[length:200%_100%]\',\\n className\\n )}\\n style={{ backgroundColor: \'#d5d4d3\' }}\\n {...props}\\n />\\n );\\n}\\n\\n
The loading effect now looks like this:
\\nThe shimmer animation smoothly transitions the gradient across the skeleton, creating a more polished loading effect.
\\n\\nLoading skeletons significantly enhance the user experience during asynchronous data fetching by improving visual stability and reducing perceived wait times. They help prevent layout shifts and ensure a smooth transition as content loads.
\\nIn this tutorial, we explored how to implement loading skeletons in React with and without external libraries like React Loading Skeleton. If you found this article helpful, feel free to share it. We’d also love to hear your thoughts or questions in the comments section. And don’t forget to check out the project source code for more information!
\\nFor further reading on skeleton loaders, check out these related articles:
\\n\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\ncontainer
utility\\n ring
width change\\n The word “tailwind” literally means the wind blowing in the same direction as a plane or boat’s course of movement. It helps an object travel faster and reach its destination quicker, ensuring speed and efficiency.
\\nTailwind CSS is a utility-first framework that lets you “rapidly build modern websites without leaving your HTML.” It’s not every developer’s cup of tea, but Tailwind CSS has gained significant popularity since its release in 2019.
\\nToday, you’ll likely find Tailwind CSS listed alongside established names like Bootstrap and Bulma when you search for “Top [insert number] CSS frameworks.”
\\nThis article will provide a preview and in-depth analysis of the next version, Tailwind v4.0. We’ll cover strategies for migrating existing projects and examples demonstrating the new features of Tailwind v4.0. We’ll also compare it with similar CSS frameworks, and explore the benefits and limitations of using this framework.
\\nTailwind v4.0 has been in development for several months, and the first public beta version was released in November 2024.
\\nFor more detailed information, you can visit the prerelease documentation, but this guide will highlight some of the many new and exciting features developers can look forward to in Tailwind CSS v4.0
\\nThe Tailwind team announced a new performance engine, Tailwind Oxide, in March 2024. Some benefits include a unified toolchain and simplified configuration to speed up the build process.
\\nWith the current Tailwind version, the tailwind.config.js
file allows you to override Tailwind’s default design tokens. It’s a customization hub where you can add custom utility classes and themes, disable plugins, and more.
Its most important function is defining the content sources for your project so Tailwind can scan for relevant utility class names and produce the right output.
\\nHere’s the default code for the tailwind.config.js
file when setting up a new project with Tailwind v3:
/** @type {import(\'tailwindcss\').Config} */\\nexport default {\\n content: [\\n \\"./index.html\\",\\n \\"./src/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n theme: {\\n extend: {},\\n },\\n plugins: [],\\n}\\n\\n
After setting up the config file, the next step involved adding the Tailwind directives to the index.css
file.
These are the directives in Tailwind v3:
\\n@tailwind base;\\n@tailwind components;\\n@tailwind utilities;\\n\\n
In Tailwind v4, you don’t need a tailwind.config.js
file and @tailwind
directives. You’ll only need to import \\"tailwindcss\\"
into your main CSS file, and you’re good to go:
@import \\"tailwindcss\\";\\n\\n
This reduces the number of steps when setting up a new project and the number of files.
\\nYou can still use a JS config file, for example, if you already have an existing v3 project, by using the new @config
directive to load it in your CSS file:
@import \\"tailwindcss\\";\\n\\n@config \\"../../tailwind.config.js\\";\\n\\n
However, not every feature, like corePlugins
, important
, and separator
, is likely to be supported in the full v4.0 release. Some options, like safelist
may return with changes in behavior.
If there are files you don’t want to include, you can use the source()
function when importing Tailwind to limit automatic detection:
@import \\"tailwindcss\\" source(\\"../src\\");\\n\\n
For additional sources that Tailwind doesn’t detect by default, like anything in your .gitignore
file, you can add them using the @source
directive:
@import \\"tailwindcss\\";\\n@source \\"../node_modules/@my-company/ui-lib/src/components\\";\\n\\n
You can also disable source detection entirely:
\\n@import \\"tailwindcss\\" source(none);\\n\\n
You can import the specific individual layers you need for your project and disable Tailwind’s base styles:
\\n@layer theme, base, components, utilities;\\n@import \\"tailwindcss/theme\\" layer(theme);\\n@import \\"tailwindcss/utilities\\" layer(utilities);\\n\\n
The new CSS-first approach makes adding custom styling to your Tailwind project easier. Any customization will be added directly to the main CSS file instead of a JavaScript configuration file.
\\nIf, for instance, you want to configure new colors for a custom theme in Tailwind CSS v3, you’ll need to define the new utility classes in the theme
section of the tailwind.config.js
file.
Here’s how you’d do it in the JavaScript configuration file:
\\n/** @type {import(\'tailwindcss\').Config} */\\nexport default {\\n content: [\\n \\"./index.html\\",\\n \\"./src/**/*.{js,ts,jsx,tsx}\\",\\n ],\\n theme: {\\n extend: {\\n colors: {\\n background:\'#764abc\',\\n lilac: \'#eabad2\',\\n light: \'#eae3f5\'\\n }\\n },\\n },\\n plugins: [],\\n}\\n\\n
Here’s how you would add the classes to your HTML file:
\\n<div className=\\"bg-background\\">\\n <header className=\\"flex justify-between py-4 px-8\\">\\n <a href=\\"/\\" className=\\"text-light\\">LogRocket - Oscar</a>\\n\\n <ul className=\\"text-lilac\\">\\n <li><a href=\\"#\\">Home</a></li>\\n <li><a href=\\"#\\">About</a></li>\\n <li><a href=\\"#\\">Contact</a></li>\\n </ul>\\n </header>\\n\\n
In this example, the utility classes are bg-background
, text-light
, and text-lilac
.
In Tailwind CSS v4.0, you configure all your customizations in CSS with the new @theme
directive:
@import \\"tailwindcss\\";\\n\\n@theme {\\n --color-background-100: #764abc;\\n --color-lilac-100: #eabad2;\\n --color-light-100: #eae3f5;\\n}\\n\\n
The utility classes are then added to the HTML. You can choose to have different shades of the same color like the default Tailwind colors:
\\n<div className=\\"bg-background-100\\">\\n <header className=\\"flex justify-between py-4 px-8\\">\\n <a href=\\"/\\" className=\\"text-light-100\\">LogRocket - Oscar</a>\\n\\n <ul className=\\"text-lilac-100\\">\\n <li><a href=\\"#\\">Home</a></li>\\n <li><a href=\\"#\\">About</a></li>\\n <li><a href=\\"#\\">Contact</a></li>\\n </ul>\\n </header>\\n\\n
If you’re testing it out with VS Code, the @import
directive may be highlighted as an error, but don’t worry — it’ll work just fine.
Note that the examples above were created with Tailwind CSS and React, hence why we have className
in the HTML and not class
. The utilities remain the same no matter the framework you’re working with.
From the previous example, you can see that CSS variables drive all theme styling in Tailwind v4.0:
\\n@theme {\\n --font-display: \\"Poppins\\", \\"sans-serif\\";\\n\\n --ease-fluid: cubic-bezier(0.3,0,0,1);\\n\\n --color-background-100: #764abc;\\n}\\n\\n
In v4.0, you can override a specific theme namespace — that is, the default utilities for colors, fonts, text, and more, or the entire Tailwind theme and configure your own. You can easily configure custom styling for essentially every Tailwind utility in the main CSS file:
\\nTo override the entire default theme, use --*: initial
. If you wanted to override the default Tailwind font and define your own, you’d use --font-*: initial
followed by your custom styling:
@import \\"tailwindcss\\";\\n\\n@theme {\\n --font-*: initial\\n --font-display: \\"Poppins\\", \\"sans-serif\\";\\n}\\n\\n
In this case, font-display
will be the only font-family
utility available in your project.
You can set default styling for a custom property using double-dashes. Here’s a page with the default Tailwind CSS font and text styling:
\\nHere’s the HTML markup for this page:
\\n<div className=\\"bg-background h-screen\\">\\n <header className=\\"flex justify-between py-4 px-8\\">\\n <a href=\\"/\\" className=\\"text-lg text-light font-bold\\">LogRocket - Oscar</a>\\n\\n <ul className=\\"hidden md:flex flex- items-center align-middle gap-4 font-bold text-lilac\\">\\n <li>\\n <a href=\\"#\\" className=\\"py-2 px-4 rounded-md\\">Home</a>\\n </li>\\n <li><a href=\\"#\\" className=\\"\\">About</a></li>\\n <li><a href=\\"#\\" className=\\"\\">Contact</a></li>\\n </ul>\\n </header>\\n <div className=\\"container px-32 py-32\\">\\n <div className=\\"flex\\">\\n <div>\\n <h1 className=\\"text-5xl text-lilac font-bold\\">Tailwind CSS</h1>\\n <br />\\n <h3 className=\\"text-3xl text-light font-semibold\\">\\n Build websites with utility classes from the comfort of your HTML\\n </h3>\\n <br />\\n <p className=\\"text-2xl text-light\\">\\n Lorem ipsum dolor sit amet consectetur adipisicing elit. Fu gi at veniet atque unde laudantium. Ipsa nam quisquam quod non fficiis porro? Lorem ipsum dolor, sit amet consectetur adipisicing elit. Eos iure nemo a hic sunt incidunt?\\n </p>\\n </div>\\n </div>\\n </div>\\n </div>\\n\\n
We’re using the custom colors from the earlier example, and configuring new font and text styling:
\\n@import \\"tailwindcss\\";\\n@import url(\'https://fonts.googleapis.com/css2?family=Poppins:ital,wght@0,100;0,200;0,300;0,400;0,500;0,600;0,700;0,800;0,900;1,100;1,200;1,300;1,400;1,500;1,600;1,700;1,800;1,900&family=Roboto:ital,wght@0,100;0,300;0,400;0,500;0,700;0,900;1,100;1,300;1,400;1,500;1,700;1,900&display=swap\');\\n\\n@theme { \\n --font-display : \\"Poppins\\", sans-serif; \\n --font-logo: \\"Roboto\\", sans-serif;\\n\\n --text-logo: 1.5rem;\\n --text-logo--font-weight: 700;\\n\\n --text-big: 6rem;\\n --text-big--font-weight: 700;\\n --text-big--letter-spacing: -0.025em;\\n\\n --color-background-100: #764abc;\\n --color-lilac-100: #eabad2;\\n --color-light-100: #eae3f5;\\n}\\n\\n
In this example, we’re importing two fonts and saving them under the --font-display
and --font-logo
variables, to be used for the logo and h1
header. We’re also configuring new text sizes and default styling for both.
So, when you add the utility class text-logo
in your HTML, the element will have a font size of 1.5rem
and font-weight
of 700
by default. Similarly, any element with the class name text-big
will have a font-size
of 6rem
, font-weight
of 700
, and letter-spacing
of -0.025em
by default.
Now we add the new utility classes into the HTML file:
\\n<div className=\\"bg-background-100 h-screen\\">\\n <header className=\\"flex justify-between py-4 px-8\\">\\n <a href=\\"/\\" className=\\"font-logo text-logo text-light-100\\">LogRocket - Oscar</a>\\n\\n <ul className=\\"hidden md:flex flex items-center align-middle gap-4 font-display text-lilac-100\\">\\n <li>\\n <a href=\\"#\\" className=\\"py-2 px-4 rounded-md\\">Home</a>\\n </li>\\n <li><a href=\\"#\\" className=\\"\\">About</a></li>\\n <li><a href=\\"#\\" className=\\"\\">Contact</a></li>\\n </ul>\\n </header>\\n <div className=\\"container px-32 py-32 font-display\\">\\n <div className=\\"flex\\">\\n <div>\\n <h1 className=\\"text-lilac-100 text-big\\">Tailwind CSS</h1>\\n <br />\\n <h3 className=\\"text-3xl text-light-100\\">\\n Build websites with utility classes from the comfort of your HTML\\n </h3>\\n <br />\\n <p className=\\"text-2xl text-light\\">\\n Lorem ipsum dolor sit amet consectetur adipisicing elit. Fu gi at veniet atque unde laudantium. Ipsa nam quisquam quod non fficiis porro? Lorem ipsum dolor, sit amet consectetur adipisicing elit. Eos iure nemo a hic sunt incidunt?\\n </p>\\n </div>\\n </div>\\n </div>\\n </div>\\n\\n
Here’s a screenshot of the page with the custom styling:
\\nIn Tailwind v4.0, there will be less dependency on the default Tailwind values as multiple classes can be replaced with one custom utility. In our example, the text-big
class name replaces the text-5xl
and font-bold
utility classes for the h1
header.
Again, this isn’t limited to specific namespaces — you can do this with every utility.
\\nSome utilities are no longer based on your theme configuration in Tailwind v4.0. You’ll be able to specify exactly what you want directly in your HTML file without extra configuration.
\\nIn Tailwind v3, you’d need to define the number of columns in your tailwind.config.js
file, but in Tailwind v4.0 you can use any number from as small as grid-cols-5
to as large as grid-cols-73
. It also applies to the z-index utilities (for example, z-40
) and opacity-*
.
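For example, the following markup works in v4.0 without any theme configuration (our own illustrative snippet, not taken from the docs):

{/* A 15-column grid, a z-index of 45, and 35% opacity:
   none of these need a config entry in v4.0 */}
<div className="grid grid-cols-15 gap-2 z-45 opacity-35">
  {/* grid items */}
</div>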
Tailwind v4.0 also has built-in support for variants like data-*
. You can use them without arbitrary values.
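For instance, a bare data attribute can be targeted directly, as in this sketch:

{/* data-current has no configured variant, yet the style
   applies whenever the attribute is present */}
<div data-current className="opacity-50 data-current:opacity-100">
  Current item
</div>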
The main benefit of these changes is that developers will be able to spend less time configuring non-essential, or non-core, design tokens.
\\nSpacing utilities, like m-*
, w-*
, mt-*
, px-*
, and more, are generated dynamically using a base spacing value of 0.25rem
defined in the default Tailwind v4.0 theme.
Every multiple of the base spacing value is available in the spacing scale. So if mt-1
is 0.25rem
, mt-2
will be 0.25rem
multiplied by two, which is 0.5rem
, and mt-21
will be 5.25rem
:
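Under the hood, each spacing utility is derived from that single variable. Here is a simplified sketch of the kind of CSS v4.0 generates (paraphrased, not the exact compiler output):

:root {
  --spacing: 0.25rem; /* base value from the default theme */
}

/* every numeric suffix is a multiplier of the base value */
.mt-2 {
  margin-top: calc(var(--spacing) * 2); /* 0.5rem */
}
.mt-21 {
  margin-top: calc(var(--spacing) * 21); /* 5.25rem */
}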
You can use spacing utilities with values that aren’t explicitly defined. In Tailwind v3, you’d need to use an arbitrary value like mt-[5.25rem]
or a custom theme. There’s no need for additional configuration and you can create more consistent designs.
If you want to limit the available spacing values, you can disable the default variable and define a custom scale:
@theme {
  --spacing: initial;
  --spacing-1: 0.25rem;
  --spacing-2: 0.5rem;
  --spacing-4: 1rem;
  --spacing-8: 2rem;
  --spacing-12: 3rem;
}
With this setup, every Tailwind spacing utility will only use the specifically defined values.
\\nTailwind v4 is moving from the default rgb
color palette to oklch
, which enables more vibrant colors and is less limited than rgb
:
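As a quick illustration of the difference in a theme (the values here are made up for comparison, not the actual v4.0 palette):

@theme {
  /* rgb: lightness and hue are tangled across three channels */
  /* --color-accent: rgb(59 130 246); */

  /* oklch: perceptual lightness, chroma, and hue as separate knobs,
     with access to colors outside the sRGB gamut */
  --color-accent: oklch(0.62 0.21 259);
}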
Container queries now have built-in support in Tailwind CSS v4.0; you won’t need the @tailwindcss/container-queries
plugin to create responsive containers.
Container queries are used to apply styling to an element based on the size of its parent container. This means your site’s layout adapts to individual components rather than the entire viewport.
\\nIn v4.0, you create container queries by adding the @container
utility to a parent element. For the child elements, you use responsive utilities like @sm
and @lg
to apply styling based on the parent’s size:
<div className=\\"@container\\">\\n <header className=\\"flex justify-between @sm:grid-cols-2 @lg:grid-cols-4\\">\\n <!-- child content --\x3e\\n </header>\\n</div>\\n\\n
Tailwind v4.0 also introduces a new @max-*
variant for max-width container queries. It makes it easier to add styling when the container goes below a certain size. You can combine @min-*
and @max-*
to define container query ranges:
<div className=\\"@container\\">\\n <div className=\\"flex @min-md:@max-xl:hidden\\">\\n <!-- child content --\x3e\\n </div>\\n</div>\\n\\n
In this code, the child div
will be hidden when the width of the parent container is between md
and xl
(768px
and 1280px
).
Use cases for container queries include navigation, sidebars, cards, image galleries, and responsive text. They also provide more flexibility and are well-supported across browsers, so you can start using them in your v4.0 projects.
\\nIf you want to upgrade a v3 project to v4, Tailwind has provided an upgrade tool to do most of the work for you.
\\nTo upgrade your project, run the following command:
\\nnpx @tailwindcss/upgrade@next\\n\\n
The upgrade tool will automate several tasks like updating dependencies, migrating your JS config file to CSS, and handling changes in your template files.
\\nTailwind recommends using a new branch for the upgrade, to keep your main branch intact, and carefully reviewing the diff. Running a git diff
command helps you see and understand the changes in your project. You’d also want to test your project in a browser to confirm everything is working as it should.
Complex projects might require you to make manual adjustments, and Tailwind has outlined key changes and how to adapt to them, which we’ll cover below.
\\nPostCSS plugin: In v4.0, the PostCSS plugin is now available as a dedicated package, @tailwindcss/postcss
. You can remove postcss-import
and autoprefixer
from the postcss.config.mjs
file in your existing project:
export default {\\n plugins: {\\n \'@tailwindcss/postcss\': {},\\n },\\n};\\n\\n
If you are starting a new project, you can now install Tailwind alongside the PostCSS plugin by running the following command:
\\nnpm install tailwindcss@next @tailwindcss/postcss@next\\n\\n
Vite plugin: Tailwind CSS v4.0 also has a new dedicated Vite plugin, which they recommend you migrate to from the PostCSS plugin:
\\nimport { defineConfig } from \'vite\';\\nimport tailwindcss from \'@tailwindcss/vite\';\\n\\nexport default defineConfig({\\n plugins: [\\n tailwindcss()\\n ],\\n});\\n\\n
As we’ve seen with PostCSS, you can install v4.0 along with the Vite plugin when setting up a new project:
\\nnpm install tailwindcss@next @tailwindcss/vite@next \\n\\n
Tailwind CLI: Using the CLI tool is the easiest and fastest way to set up Tailwind CSS, and it now resides in a dedicated @tailwindcss/cli
package.
You’d need to update your build commands accordingly:
\\nnpx @tailwindcss/cli -i input.css -o output.css\\n\\n
Several outdated or undocumented utilities have been removed and replaced with modern alternatives:
container utility
In v4.0, you configure the container utility with @utility:
@import \\"tailwindcss\\";\\n\\n@utility container {\\n margin-inline: auto;\\n padding-inline: 2rem;\\n}\\n\\n
Configuration options like center
and padding
don’t exist in v4.0.
Default scale adjustments have been made to every shadow, blur, and border-radius utility, to make sure they have a named value:
\\nYou’d need to replace each utility in your project to ensure things don’t look different.
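For example, going by the upgrade guide, the bare and -sm sizes each shift down one step, so markup needs a one-for-one rename (verify the full mapping in the docs before relying on this sketch):

<!-- v3 -->
<div class="shadow-sm rounded-sm blur-sm"></div>

<!-- v4.0 equivalents of the same visual result -->
<div class="shadow-xs rounded-xs blur-xs"></div>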
\\nIn v3, the default border color is gray-200
. You didn’t need to explicitly set a color when using the border
utility:
<header className=\\"flex justify-between border-b-2 py-4 px-8\\">\\n <--! content --\x3e \\n </header>\\n\\n
In Tailwind CSS v4, the border color is updated to currentColor
, and your current project may experience a visual change if you don’t specify a color anywhere you use the border
utility.
Here’s the default border color in v4.0:
\\nTo maintain the v3 default behavior, you can add these CSS lines to your project:
\\nthe v3 behavior:\\n@import \\"tailwindcss\\";\\n\\n@layer base {\\n *,\\n ::after,\\n ::before,\\n ::backdrop,\\n ::file-selector-button {\\n border-color: var(--color-gray-200, currentColor);\\n }\\n}\\n\\n
ring width change
The ring
utility adds a 3px ring in v3, but it defaults to 1px in v4. Replace any usage of the ring
utility with ring-3
when updating your project to maintain its appearance.
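In markup, the change looks like this (illustrative snippet):

<!-- v3: "ring" produced a 3px ring -->
<button class="focus:ring focus:ring-blue-500">Save</button>

<!-- v4: "ring" is now 1px, so use ring-3 to keep the old width -->
<button class="focus:ring-3 focus:ring-blue-500">Save</button>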
In v4, placeholder text will use the current text color at 50% opacity by default. It uses the gray-400
color in v3, and if you want to preserve this behavior, add this to your CSS:
@layer base {\\n input::placeholder,\\n textarea::placeholder {\\n color: theme(--color-gray-400);\\n }\\n}\\n\\n
Also in v4, the outline-none
utility doesn’t add a transparent 2px
outline like it does in v3. There’s a new outline-hidden
utility in v4 that behaves like outline-none
from v3.
When upgrading your project, you’d need to replace outline-none
with outline-hidden
to maintain its current state, unless you want to remove the outline entirely.
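A minimal before-and-after sketch:

<!-- v3: invisible outline (still visible in forced-colors mode) -->
<input class="focus:outline-none" />

<!-- v4: outline-hidden reproduces that behavior;
     outline-none now removes the outline entirely -->
<input class="focus:outline-hidden" />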
Custom utilities now work with the new @utility
API instead of @layer
utility. This change ensures compatibility with native cascade layers.
They are now just single-class names and no longer complex selectors:
\\n@utility tab-4 {\\n tab-size: 4;\\n}\\n\\n
Tailwind v4.0 stacks variants like first
and last
from left to right instead of right to left, so you will need to reverse the order of stacked variants in your project.
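The upgrade guide's example of this change is a straight order flip; sketched from memory here, so double-check it against the docs:

<!-- v3: stacked variants read right to left -->
<ul class="py-4 first:*:pt-0 last:*:pb-0">

<!-- v4: stacked variants read left to right -->
<ul class="py-4 *:first:pt-0 *:last:pb-0">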
The syntax for variables in arbitrary values has changed from square brackets to parenthesis to avoid ambiguity with new CSS standards. You’d need to update this in your project:
\\n<div class=\\"bg-(--brand-color)\\">\\n <!-- ... --\x3e\\n</div>\\n\\n
In v4, hover styles will only work on devices that support hover interactions to align with accessibility practices.
\\nYou can enable backward compatibility by defining a hover
variant in the CSS file:
@import \\"tailwindcss\\";\\n\\n@variant hover (&:hover);\\n\\n
theme() function
Tailwind CSS v4.0 generates variables for all theme values, so the theme()
function is not necessary. Tailwind recommends that all theme()
functions in your project be replaced with CSS variables wherever possible:
@import \\"tailwindcss\\";\\n\\n.my-class {\\n background-color: var(--color-red-500);\\n}\\n\\n
For more details about the changes coming in Tailwind v4.0, you should visit the prerelease documentation.
\\nThe most obvious alternative to Tailwind CSS is Bootstrap, the most popular CSS framework in the world. It has an extensive library of predefined components.
\\nBootstrap is perhaps more beginner-friendly than Tailwind CSS. You can create ready-to-use components using specific and straightforward class names. Tailwind requires you to understand the utilities and their underlying CSS rules.
Another advantage Bootstrap has over Tailwind CSS is that it includes JavaScript plugins by default, so you get interactive components out of the box. Tailwind CSS has to be combined with JavaScript frameworks or your own scripts for that.
\\nHowever, Bootstrap is not as customizable or as flexible as Tailwind CSS. A long-standing argument is that all Bootstrap sites look the same. With its utility-first approach, Tailwind offers more flexibility and control.
More utility-first CSS frameworks have popped up in recent years, like missing.css and Mojo CSS. None have been able to take the crown from Tailwind, but that’s not to say it’s without its fair share of limitations:
\\nSteep learning curve: As earlier mentioned, the utility-first approach and large number of classes can be difficult for beginners to learn.
\\nCode readability: Because you’re working mainly in your HTML file, the code can become hard to read as each element accumulates utilities.
\\nInconsistent design: The flexibility of Tailwind CSS can lead to inconsistent designs across a project if you’re not mindful.
\\nSwitching frameworks: Projects can become tightly coupled with Tailwind CSS, making it difficult to switch to another framework.
\\nUpgrading your existing projects to the new version of Tailwind may seem like a difficult task, and this is true if you have a complex project, but the benefits are worthwhile. Tailwind is making everything faster and simpler by removing additional tools and files and providing clearer syntax.
In this era of big data, effectively visualizing and interpreting data is crucial for making informed decisions and gaining actionable insights. Visualizing data through diagrams not only helps in interpreting complex datasets but also in communicating these insights effectively to a wider audience.
\\nChartDB is a powerful tool designed to simplify and enhance the process of visualizing complex databases. In this article, we’ll explore how to get started with ChartDB, practice creating your first database diagram, and provide you with practical tips to elevate your data storytelling skills.
\\nChartDB is an open-source, web-based database diagramming editor that lets you visualize and manage database schemas through interactive diagrams.
\\nWhile many tools in the ecosystem boast similar features, ChartDB stands out with its ease of use and features that make database visualization effortless. One such feature is instant schema visualization using a single query, Smart Query, which lets you import schemas directly as JSON files, thus making it incredibly fast to visualize your database. This can be useful for documentation, collaboration, or understanding database structures.
\\nOther key features of ChartDB include:
\\nChartDB supports popular databases such as PostgreSQL, MySQL, SQL Server, MariaDB, and ClickHouse.
\\nAnother standout feature of ChartDB is its flexibility in usage options. You can either use the hosted web app on the official website or self-host locally using Docker or Node.js.
\\nThe web app is the quickest way to get started with ChartDB. Simply sign in with a GitHub or Google account, and you’re ready to go. You can skip to the next section if you would prefer to use this method.
\\nTo install ChartDB locally, ensure Node is installed on your machine. Optionally, you can use Docker if preferred. Once these requirements are met, clone the repository with the following command:
\\ngit clone https://github.com/chartdb/chartdb.git\\n\\n
The repository is a few megabytes in size, so cloning might take a minute or two on slower networks. Once it’s completed, navigate to the chartdb
folder and install the necessary dependencies:
cd chartdb
npm install
After the installation, start the development server with this command:
\\nnpm run dev\\n\\n
Once the server is running, open your browser and go to localhost:5173
to access ChartDB.
\\nFor production builds, use this command:
npm run build\\n
ChartDB allows you to add AI capabilities to your locally deployed fork. To enable this feature, you’ll need a valid OpenAI key. If you have one, build the application with the following command instead:
\\nVITE_OPENAI_API_KEY=<YOUR_OPEN_AI_KEY> npm run build\\n\\n
Note: replace <YOUR_OPEN_AI_KEY>
with your actual OpenAI key.
For those who prefer Docker, you can build and run ChartDB using the following commands:
docker build -t chartdb .
docker run -e OPENAI_API_KEY=<YOUR_OPEN_AI_KEY> -p 8080:80 chartdb
Again, replace <YOUR_OPEN_AI_KEY>
with your OpenAI key. Once the build process is complete, access ChartDB by navigating to localhost:8080
in your browser.
If everything is set up correctly, you’ll see the following screen when you start the local server:
\\nIn the next section, we’ll look at how to create your first database diagram.
\\nOn your first visit to the ChartDB web app, you’ll get a modal prompting you to select your database type from a list of supported options, as shown in the image in the previous section. After clicking on a database icon, you’ll be taken to a screen similar to this:
\\nOn this screen, you are provided with a “magic query“ script that you can run in your database to retrieve the schemas as JSON. Once you have the resulting JSON, you are expected to copy and paste it into the empty field. ChartDB will use this input to generate a visual representation of your database.
\\nHowever, before copying the script, you must select your database edition. For this tutorial, we’ll use a PostgreSQL database, which offers three editions: Regular
, Supabase
, and Timescale
. We’ll proceed with the Regular
edition, but you should choose the one that matches your database type.
Next, decide how you want to run the script. You can use a database client interface like pgAdmin or the Postgres command-line tool. We’ll use pgAdmin for simplicity.
\\nIf you don’t have a Postgres database set up yet, you can quickly set one up by downloading and installing Postgres from the official website. It’s often quicker to just install the package, set up a database, and use pgAdmin to query the database.
\\nHowever, PostgreSQL on Windows can sometimes be prone to errors. If you encounter issues with the standard installation process, consider setting up Postgres using Docker instead.
\\nOnce PostgreSQL is set up, connect it to pgAdmin using a hostname and port. Since PostgreSQL is running locally, the host will typically be your IP address and port, 5432
.
After successfully connecting to pgAdmin, right-click on the Databases menu, then select Create > Database from the context menu:
\\nIn the modal that appears, enter the database name in the Database field and click the Save button to create the database:
\\nOnce the database is created, it will appear in the Databases dropdown. Locate the newly created database (in this case, ecommerce), right-click on it, and select the Query Tool option from the context menu:
\\nThis will open a new tab with a Query
field where you can input and run scripts using the Play icon or press F5
to query the database:
Follow these steps to add tables and sample data to your newly created database:
\\n1. Create tables:
CREATE TABLE users (
  user_id SERIAL PRIMARY KEY,
  username VARCHAR(50) NOT NULL UNIQUE,
  email VARCHAR(100) NOT NULL UNIQUE,
  password_hash VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE products (
  product_id SERIAL PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  description TEXT,
  price DECIMAL(10, 2) NOT NULL,
  stock_quantity INT DEFAULT 0,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE orders (
  order_id SERIAL PRIMARY KEY,
  user_id INT REFERENCES users(user_id),
  order_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  status VARCHAR(20) DEFAULT 'pending'
);

CREATE TABLE order_items (
  order_item_id SERIAL PRIMARY KEY,
  order_id INT REFERENCES orders(order_id),
  product_id INT REFERENCES products(product_id),
  quantity INT NOT NULL,
  price DECIMAL(10, 2) NOT NULL
);
2. Create indexes:
CREATE INDEX idx_users_username ON users(username);
CREATE INDEX idx_products_name ON products(name);
CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_order_items_order_id ON order_items(order_id);
3. Add sample data:
INSERT INTO users (username, email, password_hash) VALUES
('johndoe', '[email protected]', 'hashed_password1'),
('janesmith', '[email protected]', 'hashed_password2');

INSERT INTO products (name, description, price, stock_quantity) VALUES
('Laptop', 'High-performance laptop', 1200.00, 10),
('Smartphone', 'Latest model smartphone', 800.00, 25);

INSERT INTO orders (user_id, status) VALUES
(1, 'completed'),
(2, 'pending');

INSERT INTO order_items (order_id, product_id, quantity, price) VALUES
(1, 1, 1, 1200.00),
(2, 2, 2, 1600.00);
These scripts will create a database for a fictional e-commerce platform that will include a table for users, products, orders, and order items, along with some relationships between the tables.
\\n\\nAfter successfully creating the database and populating it with tables and data, clear the Query field, copy and paste the magic query from ChartDB, and then run it. If everything is set up correctly, the Data Output tab below the Query field will display a result similar to the example shown below:
\\nHere’s the schema for the database we just created in JSON format. To copy it, click on the JSON output to highlight the copy icon, then click the icon to copy the output to your clipboard:
\\nNext, return to the ChartDB web app, paste the JSON output into the empty field in the ChartDB modal, and click the Check Script Result button to validate the script’s output:
\\nOnce the JSON schema is validated, click Import, and ChartDB will generate a diagram from it:
\\nCongratulations! You’ve successfully created your first diagram in ChartDB. Next, we’ll explore the editor and examine how its components work together.
\\nNow that you have a database visualized, let’s look at how to use the editor to enhance your visualization and explore the additional features ChartDB offers.
\\nAt a glance, the editor is divided into two intuitive sections:
\\nThis section contains a list of all the tables available in your database schema. You can add new tables, modify existing ones, and establish or edit relationships between tables directly in the editor:
\\nThe tables are displayed in an expandable tree view which allows you to drill down into each table to view, edit, or add columns and indexes. You can also add annotations, which is particularly useful for collaboration and documentation purposes:
\\nThe search bar at the top of the panel dynamically filters through the list of tables to find specific tables or columns quickly:
\\nThis is where the magic happens. The main workspace is a grid board that displays your database tables as movable boxes, with columns and keys clearly listed inside each box:
\\nLines connecting the boxes represent relationships (foreign keys) between tables. They illustrate how the boxes interact and provide a clear visual of table relationships:
\\nThe boxes support interactive editing, meaning you can click the edit icon to expand the table’s tree view in the left panel, where you can add new properties or modify existing ones.
\\nRelationships can also be established directly on the board using connection points (or anchors) that appear when a box is clicked:
\\nThese connection points indicate where relationships can be created or already exist on the boxes. You can drag a connection point to another table’s connection point to establish a foreign key relationship. However, it’s important to note that connections must be made only between related fields.
\\nIn the example below, an error occurs when we attempt to connect two unrelated fields, but succeeds when we turn them into related fields:
\\n\\n
At the bottom of the board are the board controls. These controls allow you to zoom in and out of the schema view and pan around the workspace to focus on different parts of the database structure.
\\nChartDB’s AI-powered SQL export simplifies database migration across different systems. It automatically generates Data Definition Language (DDL) scripts tailored to specific database platforms. That is, the AI adjusts data types, constraints, and other schema elements to match the conventions and requirements of the target database.
\\nFor instance, it can convert AUTO_INCREMENT
in MySQL to SERIAL
in PostgreSQL or handle differences in primary key and index definitions.
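As a rough sketch of the kind of translation involved (hypothetical output, not ChartDB's exact generated DDL):

-- MySQL source schema
CREATE TABLE customers (
  customer_id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

-- PostgreSQL target produced by the export
CREATE TABLE customers (
  customer_id SERIAL PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);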
To use this feature, click the File menu option on the editor’s navbar, select the target database from the Export SQL list, and generate a customizable DDL output:
\\nIn this example, we successfully migrated a PostgreSQL database to MariaDB without requiring deep expertise in both systems. This way, we can minimize errors and save time by eliminating the need for manual rewriting.
\\nNote that to use these features locally, you must set an environment variable with an OpenAI key when building the application. Without this key, the AI SQL export feature will not function.
\\nIf you don’t have an OpenAI key but want to access these features, consider using the web app, where the AI features work out of the box.
\\nChartDB makes sharing diagrams and embedding database schemas a breeze. You can save diagrams as image files (e.g., PNG or SVG) for slideshows or email attachments. Additionally, you can export them as JSON files for embedding diagrams into external web pages or sharing real-time ChartDB workflows with teams or clients.
\\nTo share a diagram, click on the File menu in the navbar. Below the Export SQL option in the dropdown menu, you’ll find the Export as option. Click it and select your preferred image file type to export the diagram:
\\nTo export a diagram as JSON, use the Share menu in the navbar and select Export Diagram:
\\nYou’ll also notice the Import Diagram option below the Export option. This allows you to import shared diagrams in JSON format from another ChartDB user or a validated JSON file compatible with ChartDB:
\\nChartDB is relatively new to the database diagramming ecosystem and may not yet be on par with tools like DBeaver and dbdiagram in feature depth and community scale. However, ChartDB offers a refreshing approach, with its straightforward database visualization and intuitive user interface.
\\nWhether you’re a data analyst, business professional, or developer, mastering ChartDB can help you transform raw data into clear and meaningful visuals.
Editor’s note: This article was reviewed and updated by Chinwike Maduabuchi in January 2025 to showcase how to use TailwindCSS to style the navbar, and to highlight active link styling on the NavLink
component. The source code and the live project from the previous version of the post show the use of an alternative CSS module for styling the navbar.
Styling responsive navigation menus for end users is hardly ever an easy process. Frontend developers have to consider certain parameters — like device breakpoints and accessibility — to create a pleasant navigation experience. It can get more challenging in frontend frameworks like React, where CSS-in-JS tends to get tricky.
\\nIn this tutorial, we will learn how to create a responsive navigation bar using React.js and CSS. Also, we will look at how we can apply media queries to make our navbar responsive to different screen sizes. The final result will look and behave as shown below:
\\n\\n
You can fiddle with the source code and view the live project here.
\\nTo follow along with this tutorial, you’ll need:
\\nNow, let’s set up a new React application!
\\nNow let’s start moving parts around, shall we? Here’s a summary of the tools we’ll be using to create this application:
\\nYou can now begin by running this sequence of commands in your terminal:
\\n#bash\\npnpm create vite@latest responsive-react-navbar\\n# follow prompts (select react & typescript+swc)\\ncd responsive-react-navbar\\npnpm install\\npnpm run dev\\n\\n
This will create a new React project with starter files for us to start our implementation. Next up, we will install the necessary dependencies discussed earlier:
\\npnpm add react-router-dom lucide-react @mantine/hooks\\n\\n
After that’s done, you can open the project in your preferred editor and we’ll proceed to install and configure Tailwind.
\\nNow, install the remaining dependencies:
\\npnpm add -D tailwindcss postcss autoprefixer tailwind-merge clsx \\n\\n
Now that we have Tailwind installed, run the following command to generate your Tailwind and PostCSS config files:
\\n## bash\\nnpx tailwindcss init -p\\n## convert tailwind config file to a typescript \\nmv tailwind.config.js tailwind.config.ts\\n\\n
Replace your Tailwind config content file with the code below to ensure all Tailwind classes apply to your React components. You will also notice global variables (CSS custom properties) being used to configure the font and colors of the application:
// tailwind.config.ts
import type { Config } from 'tailwindcss'

const config: Config = {
  content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}'],
  theme: {
    extend: {
      fontFamily: {
        // These reference the custom properties defined in src/index.css
        mono: ['var(--font-dmsans)'],
        sans: ['var(--font-inter)'],
      },
      colors: {
        primary: {
          DEFAULT: 'var(--primary)',
        },
        secondary: {
          DEFAULT: 'var(--secondary)',
        },
      },
    },
  },
  plugins: [],
}
export default config
Modify your index.css
file to include the Tailwind directives and CSS custom properties we used earlier:
/* src/index.css */
@import url('https://fonts.googleapis.com/css2?family=Inter:ital,opsz,wght@0,14..32,100..900;1,14..32,100..900&display=swap');
@import url('https://fonts.googleapis.com/css2?family=DM+Sans:ital,opsz,wght@0,9..40,100..1000;1,9..40,100..1000&family=Inter:ital,opsz,wght@0,14..32,100..900;1,14..32,100..900&display=swap');

@tailwind base;
@tailwind components;
@tailwind utilities;

:root {
  --primary: #242424;
  --secondary: rgba(255, 255, 255, 0.87);
  --font-inter: 'Inter', sans-serif;
  --font-dmsans: 'DM Sans', sans-serif;
  --navbar-height: 80px;
  font-family: var(--font-dmsans);
  line-height: 1.5;
  font-weight: 400;
  color: var(--secondary);
  background-color: var(--primary);
  font-synthesis: none;
  text-rendering: optimizeLegibility;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}
Here, we’ve imported our fonts from Google Fonts and added the Tailwind directives to our global stylesheet. Now Tailwind is set up across our project. We can also create a utility function that will help us effectively apply conditional styles to any element. This function will make use of the tailwind-merge
and clsx
libraries installed earlier.
Run this to create the file from your terminal:
#bash
mkdir src/lib && touch src/lib/utils.ts
And fill with the content below:
\\n// src/lib/utils.ts\\nimport { clsx, type ClassValue } from \'clsx\'\\nimport { twMerge } from \'tailwind-merge\'\\n\\nexport function cn(...inputs: ClassValue[]) {\\n return twMerge(clsx(inputs))\\n}\\n\\n
You’ll see this in action when we start changing the navigation bar’s appearance based on some React states.
\\nTo begin recreating the app you saw in the demo, you will find the internal pages of the app in src/components/pages.tsx
. Once you have that copied over, you can start constructing the app routes with React Router Dom.
App.tsx
is the entry point of our React application. Therefore, we can write the application’s routing structure here.
We use BrowserRouter
from React Router DOM to wrap the entire app and use the Routes
and Route
components to define the routes:
// src/App.tsx
import { BrowserRouter, Routes, Route } from 'react-router-dom'
import { About, Contact, Home, Pricing } from './components/pages'
import { Navbar } from './components/navbar'

function App() {
  return (
    <main>
      <BrowserRouter>
        <Navbar />
        <div className='min-h-screen w-full flex items-center justify-center'>
          <Routes>
            <Route path='/' element={<Home />} />
            <Route path='/about' element={<About />} />
            <Route path='/pricing' element={<Pricing />} />
            <Route path='/contact' element={<Contact />} />
          </Routes>
        </div>
      </BrowserRouter>
    </main>
  )
}
export default App
Here, we are bootstrapping our app’s routing and navigation using React Router DOM. We also imported the Navbar
component and then defined the Routes
, which are simple components exported from src/components/pages.tsx
:
// src/components/pages.tsx
export const Home = () => {
  return <div>Home Page</div>
}
export const About = () => {
  return <div>About Page</div>
}
export const Contact = () => {
  return <div>Contact Page</div>
}
export const Pricing = () => {
  return <div>Pricing Page</div>
}
Routes
is a container for a nested tree of <Route>
elements that each renders the branch that best matches the current location. <Route>
declares an element that should be rendered at a certain URL path.
Navbar
componentOur goal is to create a responsive navbar that initially presents the navigation menu in a horizontal layout for larger screens. As the viewport size reduces to mobile dimensions, the menu smoothly transitions to a sliding panel that enters from the right side. In this mobile view, the menu spans the full height of the screen and covers half its width, ensuring a clean and accessible design across devices.
\\n\\nIn advanced cases, however, I personally create two separate menu components — the main and the side menu to avoid complications. But we can proceed with this solution for this simple app.
\\nNow, let’s build the structure of the Navbar
component with Tailwind. First, let’s define the links and their TypeScript interface:
// src/components/navbar.tsx
interface NavLinkType {
  name: string
  path: string
}

const navLinks: NavLinkType[] = [
  { name: 'Home', path: '/' },
  { name: 'About', path: '/about' },
  { name: 'Pricing', path: '/pricing' },
  { name: 'Contact', path: '/contact' }
]
The navigation bar will consist of three main sections: (1) a header container, (2) the main navigation wrapper (the nav element), and (3) the logo, navigation links, and hamburger menu button:
\\nimport { useState } from \'react\'\\nimport { NavLink } from \'react-router-dom\'\\nimport { MenuIcon, XIcon } from \'lucide-react\'\\n\\ninterface NavLinkType {\\n name: string\\n path: string\\n}\\n\\nconst navLinks: NavLinkType[] = [\\n // ...navLinks\\n]\\n\\nexport const Navbar = () => {\\n const [isMenuOpen, setIsMenuOpen] = useState(false)\\n return (\\n <header>\\n <nav>\\n {/* Logo */}\\n <NavLink to=\'/\' className=\'font-bold\'>\\n NavigationBar\\n </NavLink>\\n {/* Navigation Links Container */}\\n <ul>\\n {navLinks.map((link) => (\\n <li key={link.name}>\\n <NavLink to={link.path}>{link.name}</NavLink>\\n </li>\\n ))}\\n <a href=\'https://chinwike.space\'>Explore Further</a>\\n </ul>\\n {/* Mobile Menu Button */}\\n <button>{isMenuOpen ? <XIcon /> : <MenuIcon />}</button>\\n </nav>\\n </header>\\n )\\n}\\n\\n
Next, let’s apply some basic Tailwind classes to style the navigation bar.
\\nUpdate the header
and nav
tags with Tailwind utility classes:
<header className=\'fixed w-full px-8 shadow-sm shadow-neutral-500 h-[--navbar-height] flex items-center\'>\\n <nav className=\'flex justify-between items-center w-full\'>\\n <NavLink to=\'/\' className=\'font-bold\'>\\n NavigationBar\\n </NavLink>\\n <ul className=\'flex items-center gap-8\'>\\n {navLinks.map((link) => (\\n <li key={link.name}>\\n <NavLink to={link.path} className=\'text-secondary\'>\\n {link.name}\\n </NavLink>\\n </li>\\n ))}\\n <a\\n href=\'https://chinwike.space\'\\n className=\'rounded-lg py-2 px-4 bg-[#1FABEB]\'\\n >\\n Explore Further\\n </a>\\n </ul>\\n </nav>\\n</header>\\n\\n
- The header is fixed at the top with a shadow for separation. Its height is mapped to the --navbar-height custom property defined earlier
- The nav is styled using Flexbox to distribute space between its child elements — currently the logo and links
- We map through the array of links and render them using React Router's NavLink component, which has extra features we'll discuss later
The result is a horizontal navbar with evenly spaced links. At this point, our navbar project’s desktop view should look like this:
\\nNow let’s move on to make this React navbar responsive in the next section.
\\nNow that we have defined the structure of the Navbar
component, we can start making it responsive using media queries.
Media queries are a CSS feature that lets you specify how your content layout will respond to different conditions — such as a change in viewport width. Tailwind’s media query classes follow a mobile-first approach, meaning styles apply to smaller screens by default and scale up as screen sizes increase. Tailwind uses breakpoint prefixes like sm:
, md:
, lg:
, and xl:
to define styles for larger viewports.
For example:
\\n<p class=\\"text-base sm:text-lg md:text-xl\\">Hello World</p>\\n\\n
Here, text-base
applies to all screens, sm:text-lg
kicks in at 640px
, and md:text-xl
applies from 768px
and up. This design ensures that your app is optimized for mobile users first, progressively enhancing for larger devices.
You can also target any breakpoint using Tailwind’s arbitrary syntax [600px]:text-lg
or by extending the breakpoints in the config file.
Let’s now create the hamburger button and apply media query classes to hide it on larger screens using the md:hidden
class. We’ll also render a MenuIcon
or an XIcon
depending on the isMenuOpen
state:
import { MenuIcon, XIcon } from \'lucide-react\'\\n\\n// navbar return body\\n<header className=\'fixed w-full px-8 shadow-sm shadow-neutral-500 h-[--navbar-height] flex items-center\'>\\n <nav className=\'flex justify-between items-center w-full\'>\\n <NavLink to=\'/\' className=\'font-bold\'>\\n NavigationBar\\n </NavLink>\\n <ul className=\'flex items-center gap-8\'>\\n {navLinks.map((link) => (\\n <li key={link.name}>\\n <NavLink to={link.path}>{link.name}</NavLink>\\n </li>\\n ))}\\n <a\\n href=\'https://chinwike.space\'\\n className=\'rounded-lg py-2 px-4 bg-[#1FABEB]\'\\n >\\n Explore Further\\n </a>\\n </ul>\\n <button aria-labelledby=\'Menu Toggle Button\' className=\'block md:hidden\'>\\n {isMenuOpen ? (\\n <XIcon className=\'size-6 text-secondary\' />\\n ) : (\\n <MenuIcon className=\'size-6 text-secondary\' />\\n )}\\n </button>\\n </nav>\\n</header>\\n\\n
Next, we’re going to attempt to reposition the menu (ul
element) to the right of the screen.
However, to do this accurately we need to be aware of two state values in our environment:
\\nWe already have an isMenuOpen
state in the component. We can track the width of the viewport using the useViewportSize
hook from the Mantine Hooks package installed earlier:
const [isMenuOpen, setIsMenuOpen] = useState(false)\\nconst { width } = useViewportSize()\\nconst isMobile = width < 768 // below md breakpoint\\n\\n
Now, let’s apply some responsive and conditional styling to the menu:
\\n<ul\\n className={cn(\\n \'flex items-center gap-8\',\\n isMenuOpen &&\\n \'bg-neutral-700 flex-col fixed top-[--navbar-height] right-0 bottom-0 w-1/2 p-8 transform transition-transform duration-300 ease-in-out translate-x-0\',\\n !isMenuOpen &&\\n isMobile &&\\n \'bg-neutral-700 flex-col fixed top-[--navbar-height] right-0 bottom-0 w-1/2 p-8 transform transition-transform duration-300 ease-in-out translate-x-full\'\\n )}\\n>\\n // ....\\n</ul>\\n\\n
Let’s break down what those responsive classes do:
Base layout:
- flex items-center gap-8 represents the horizontal layout with spacing

Mobile menu open state:
- flex-col changes to a vertical layout
- fixed top-[--navbar-height] right-0 bottom-0 positions the menu as an overlay
- w-1/2 takes half the screen width
- translate-x-0 slides the menu into view

Mobile menu closed state (on mobile devices):
- Same positioning, but with translate-x-full to hide the menu off-screen

Animations:
- transform transition-transform duration-300 ease-in-out provides a smooth sliding animation
We haven’t yet made this navbar interactive by handling click events on the menu icon. But here is what both the open and closed state of the menu would look like respectively:
\\nNavLink
componentThe NavLink
component from React Router DOM makes it easy to style active navigation links in a React application. It provides built-in properties like isActive
, which allows you to apply dynamic styles based on whether the link matches the current route.
By default, the NavLink
component automatically injects an .active
class into the rendered element when its corresponding route is active. This class can be directly targeted from your CSS file to style the active state.
Using NavLink to style active links
Below is an example of how we used the isActive
value from the className
callback function to apply different styles to active and inactive links:
{navLinks.map((link) => (
  <li key={link.name}>
    <NavLink
      to={link.path}
      className={({ isActive }) =>
        isActive ? 'text-sky-500' : 'text-secondary'
      }
      onClick={closeMenuOnMobile}
    >
      {link.name}
    </NavLink>
  </li>
))}
- The to prop specifies the path to navigate to.
- The className prop uses a callback function to dynamically apply styles based on the isActive property.
- isActive is a boolean that tells whether the link matches the current route.
Using the .active class
You can also style the active state by targeting the .active
class in your CSS file. Since NavLink
automatically applies this class to active links, you don’t need to manually handle the isActive
property.
Here is a CSS example:
\\n/* Style for active links */\\nnav a.active {\\n color: #38bdf8; /* Sky blue color */\\n font-weight: bold;\\n}\\n\\n/* Style for inactive links */\\nnav a {\\n color: #737373; /* Neutral gray */\\n}\\n\\n
Let’s now make the menu hamburger icon interactive.
\\nWe’ll start by creating the functions needed to control the navbar’s state. These include a toggleMenu
function that toggles the boolean value of isMenuOpen
and a closeMenuOnMobile
function to close the menu overlay for users after a link has been clicked:
// Toggle menu open/closed\\nconst toggleMenu = () => {\\n setIsMenuOpen(!isMenuOpen)\\n}\\n\\n// Close menu when clicking a link on mobile\\nconst closeMenuOnMobile = () => {\\n if (isMobile) {\\n setIsMenuOpen(false)\\n }\\n}\\n\\n
Here is the updated navbar.tsx
file that manages the navigation bar’s appearance with the state and nav functions created earlier:
import { useState } from \'react\'\\nimport { NavLink } from \'react-router-dom\'\\nimport { MenuIcon, XIcon } from \'lucide-react\'\\nimport { useViewportSize } from \'@mantine/hooks\'\\nimport { cn } from \'../lib/utils\'\\ninterface NavLinkType {\\n name: string\\n path: string\\n}\\nconst navLinks: NavLinkType[] = [\\n // ..navlinks\\n]\\nexport const Navbar = () => {\\n const [isMenuOpen, setIsMenuOpen] = useState(false)\\n const { width } = useViewportSize()\\n const isMobile = width < 768 // below md breakpoint\\n const toggleMenu = () => {\\n setIsMenuOpen(!isMenuOpen)\\n }\\n const closeMenuOnMobile = () => {\\n if (isMobile) {\\n setIsMenuOpen(false)\\n }\\n }\\n return (\\n <header className=\'fixed w-full px-8 shadow-sm shadow-neutral-500 h-[--navbar-height] flex items-center\'>\\n <nav className=\'flex justify-between items-center w-full\'>\\n <NavLink to=\'/\' className=\'font-bold\'>\\n NavigationBar\\n </NavLink>\\n <ul\\n className={cn(\\n \'flex items-center gap-8\',\\n isMenuOpen &&\\n \'bg-neutral-700 flex-col fixed top-[--navbar-height] right-0 bottom-0 w-1/2 p-8 transform transition-transform duration-300 ease-in-out translate-x-0\',\\n !isMenuOpen &&\\n isMobile &&\\n \'bg-neutral-700 flex-col fixed top-[--navbar-height] right-0 bottom-0 w-1/2 p-8 transform transition-transform duration-300 ease-in-out translate-x-full\'\\n )}\\n >\\n {navLinks.map((link) => (\\n <li key={link.name}>\\n <NavLink\\n to={link.path}\\n className={({ isActive }) =>\\n isActive ? \'text-sky-500\' : \'text-secondary\'\\n }\\n onClick={closeMenuOnMobile}\\n >\\n {link.name}\\n </NavLink>\\n </li>\\n ))}\\n <a\\n href=\'https://chinwike.space\'\\n className=\'rounded-lg py-2 px-4 bg-[#1FABEB]\'\\n >\\n Explore Further\\n </a>\\n </ul>\\n <button\\n aria-labelledby=\'Menu Toggle Button\'\\n className=\'block md:hidden\'\\n onClick={toggleMenu}\\n >\\n {isMenuOpen ? (\\n <XIcon className=\'size-6 text-secondary\' />\\n ) : (\\n <MenuIcon className=\'size-6 text-secondary\' />\\n )}\\n </button>\\n </nav>\\n </header>\\n )\\n}\\n\\n
Let’s see what we have now on the UI:
\\nNavigation menus serve an important role in the overall experience of your web application. Making your navbar as organized and accessible as possible can help boost UX and even SEO performance. You can check out the source code and view the live project here. If you have any questions, feel free to comment below.
\\n Users of Chrome 129 and above can now access the new scroll snap events — scrollsnapchange
and scrollsnapchanging
. These new events give users unique and dynamic control of the CSS scroll snap feature.
scrollsnapchanging
This event is triggered during a scroll gesture when the browser identifies a new scroll snap target that will be selected when the scrolling ends. It is also referred to as the pending scroll snap target.
\\nThe scrollsnapchanging
event is triggered continuously as a user scrolls slowly across multiple potential snap targets on a page without lifting their finger. However, it does not fire if the user quickly flings through the page, passing over several snap targets in one scrolling gesture. Instead, the event is triggered only for the final target where snapping is likely to settle.
scrollsnapchange
This event is triggered only when a scroll gesture results in a new scroll snap target being settled on. It occurs immediately after the scroll has stopped and just before the scrollend
event fires.
Another unique behavior of this event is that it does not trigger during an ongoing scroll gesture, since at that point the scroll has not yet ended and the snap target has likely not yet changed.
\\nScroll snap events work in conjunction with CSS scroll snap properties. These events are typically assigned to a parent container that contains the scroll snap targets.
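To make that concrete, here is a minimal sketch of the wiring (the .scroller class, its children, and the CSS in the comments are assumptions for illustration, not from the original demo):

// Both snap events are registered on the scroll container itself.
// Assumed CSS: .scroller { overflow-y: scroll; scroll-snap-type: y mandatory; }
// and each child: .scroller > section { scroll-snap-align: start; }
const scroller = document.querySelector(".scroller");

scroller.addEventListener("scrollsnapchanging", (event) => {
  // Fires mid-gesture, as soon as a new pending snap target is identified
  console.log("Pending target:", event.snapTargetBlock);
});

scroller.addEventListener("scrollsnapchange", (event) => {
  // Fires once scrolling settles on a new target, just before scrollend
  console.log("Settled target:", event.snapTargetBlock);
});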
\\nBoth of these JavaScript scroll snap events share the SnapEvent
object, which includes two important properties that are used in these events:
- snapTargetBlock — Provides a reference to the element snapped in the block direction when the event is triggered. If snapping only occurs in the inline direction, it returns null, as no element is snapped to in the block direction
- snapTargetInline — Provides a reference to the element snapped in the inline direction when the event is triggered. If snapping only occurs in the block direction, it returns null, as no element is snapped to in the inline direction

By using these properties together with scroll snap events and event handler functions, you can easily identify the element that has been snapped to and customize it or apply predefined styles as needed. In this article, we will explore how to achieve this.
\\nLet’s take a look at a quick example of how to use these events:
\\nSee the Pen
\\nUsing scrollSnapChange and scrollSnapChanging Demo by coded_fae (@coded_fae)
\\non CodePen.
This example demonstrates a scrolling container with vertical scroll snapping applied to a group of grid boxes. These boxes, represented as div
elements, serve as the scroll snap targets.
Initially, all the boxes have a white background. During a scroll action, when a new snap target is pending, the background color changes to green. Once the snap target is selected, the background transitions smoothly to red with white text.
\\nHere’s how the events are implemented. The comments in the code provide a step-by-step breakdown of the process:
\\nFor scrollsnapchanging
:
scrollContainer.addEventListener(\\"scrollsnapchanging\\", (event) => {\\n// This adds an event listener to the scroll container for the \\"scrollsnapchanging\\" event \\n// This event fires during scrolling, predicting potential snap targets\\n const previousPending = document.querySelector(\\".snap-incoming\\");\\n // Find any existing element with the \\"snap-incoming\\" class\\n\\n if (previousPending) {\\n previousPending.classList.remove(\\"snap-incoming\\");\\n // If such an element exists, remove the \\"snap-incoming\\" class\\n // This ensures only one element has this active state at a time\\n }\\n event.snapTargetBlock.classList.add(\\"snap-incoming\\");\\n // Add the \\"snap-incoming\\" class to the new snap target\\n\\n updateEventLog(\\"Snap Changing\\");\\n // Update the event log to show that the snapchanging event has occured\\n});\\n\\n
For scrollsnapchange
:
scrollContainer.addEventListener(\\"scrollsnapchange\\", (event) => {\\n // This adds an event listener to the scroll container for the \\"scrollsnapchange\\" event\\n // The event fires when a scroll snap has been completed and a new target has been selected\\n\\n const currentlySnapped = document.querySelector(\\".snap-active\\");\\n // Finds any existing element with the \\"snap-active\\" class\\n\\n if (currentlySnapped) {\\n currentlySnapped.classList.remove(\\"snap-active\\");\\n // If such an element exists, remove the \\"snap-active\\" class\\n }\\n\\n event.snapTargetBlock.classList.add(\\"snap-active\\");\\n // Adds the \\"snap-active\\" class to the new snap target\\n\\n updateEventLog(\\"Snap Changed\\");\\n // Update the event log to show that the snapchange event has occured\\n});\\n\\n
These events offer unique use cases for scroll-triggered animations, as they allow precise control over animations during active scrolling (in progress) and upon reaching a snap target.
\nLet’s take a look at some unique use cases for these events.
This use case features horizontally scrolled carousels implemented using CSS scroll snap properties, with an additional feature that animates the contents of each slide when it becomes the current snap target.
\\n\\nThis implementation is done by using the scrollsnapchange
event to dynamically apply a snap-active
style class directly to the current carousel slide. This smoothly animates the carousel, creating a more engaging presentation.
See the Pen
\\nCarousel with precision snap control by coded_fae (@coded_fae)
\\non CodePen.
Here is the step-by-step process of how the scrollsnapchange
event is used:
// Select the carousel container\\nconst carousel = document.getElementById(\\"carousel\\");\\n// Get all slides within the carousel\\nconst slides = carousel.querySelectorAll(\\".carousel-slide\\");\\n\\n// This adds an event listener to the scroll container for the \\"scrollsnapchange\\" event \\ncarousel.addEventListener(\\"scrollsnapchange\\", (event) => {\\n // Get the target slide that snapped into place (horizontally scrolled)\\n const snapTarget = event.snapTargetInline;\\n\\n // This part updates the current slide for the navigation buttons\\n const slideWidth = carousel.clientWidth;\\n currentSlide = Math.round(carousel.scrollLeft / slideWidth);\\n\\n // Remove \'snap-active\' class from previously active slides\\n const currentlySnapping = document.querySelector(\\".snap-active\\");\\n if (currentlySnapping) {\\n currentlySnapping.classList.remove(\\"snap-active\\");\\n }\\n // Add \'snap-active\' class to newly snapped slide to trigger animations\\n snapTarget.classList.add(\\"snap-active\\");\\n});\\n\\n
In CSS, the snap-active
class is combined with the element’s existing class name. When the scrollsnapchange
event triggers and adds this class to the element’s class list, the corresponding CSS rules are immediately applied, dynamically updating the slide’s appearance and animating its content in real-time:
.carousel-slide.snap-active {\\n opacity: 1;\\n scale: 1;\\n}\\n.carousel-slide.snap-active .content {\\n opacity: 1;\\n transform: translateY(0);\\n}\\n\\n
This use case functions similarly to how users interact with Instagram Reels, YouTube Shorts, and TikTok videos. It consists of a series of short videos that play automatically when snapped into view and pause when transitioning to another video.
\\nBy using the scrollsnapchanging
and scrollsnapchange
events simultaneously, the implementation precisely controls video playback, ensuring that only the currently visible video is playing while others remain paused.
See the Pen
\\nAuto-Play Video Trailers on Snap by coded_fae (@coded_fae)
\\non CodePen.
Here is how the events were used:
\\nconst snapContainer = document.querySelector(\\".snap-container\\");\\nconst trailers = document.querySelectorAll(\\"video\\");\\n\\nsnapContainer.addEventListener(\\"scrollsnapchange\\", (event) => {\\n // Get the video element of the currently snapped item\\n const visibleTrailer = event.snapTargetBlock.children[0];\\n if (visibleTrailer) {\\n // Play the visible trailer when it snaps into view\\n visibleTrailer.play();\\n }\\n});\\n\\nsnapContainer.addEventListener(\\"scrollsnapchanging\\", (event) => {\\n const visibleTrailer = event.snapTargetBlock.children[0];\\n if (visibleTrailer) {\\n // Pause the currently visible trailer when transitioning to another snap\\n visibleTrailer.pause();\\n }\\n});\\n\\n
In this use case, the video element is not directly nested within the snapTargetBlock
. Therefore, it is necessary to access the children
of snapTargetBlock
to access the video element:
const visibleTrailer = event.snapTargetBlock.children[0];\\n\\n
This use case highlights a dynamic and interactive page that showcases different “snap animations”. These animations are triggered as sections snap into view, and when a section is scrolled out of view, its corresponding animations are smoothly removed. This approach creates a unique and engaging scrollytelling experience.
\\nIn the demo below, we have a landing page divided into four distinct sections, each featuring unique animations that are revealed as you scroll. This is achieved using the scrollsnapchanging
and scrollsnapchange
events, which dynamically add or remove the animations based on the scrolling behavior.
<br />\\nSee the Pen <a href=\\"https://codepen.io/coded_fae/pen/VYZjJyM\\"><br />\\nDynamic Scrollytelling</a> by abiolaesther_ (<a href=\\"https://codepen.io/coded_fae\\">@coded_fae</a>)<br />\\non <a href=\\"https://codepen.io\\">CodePen</a>.<br />\\n
\\nHow the logic works:
\\nconst scrollContainer = document.querySelector(\\".container\\");\\n\\n// Handle the scrollsnapchange event\\nscrollContainer.addEventListener(\\"scrollsnapchange\\", (event) => {\\n // Get the current snap target block\\n const snapTarget = event.snapTargetBlock;\\n\\n // Add the \\"active\\" class to the first child element (if it exists)\\n if (snapTarget.children[0]) {\\n snapTarget.children[0].classList.add(\\"active\\");\\n }\\n\\n // Add the \\"active\\" class to the second child element (if it exists)\\n if (snapTarget.children[1]) {\\n snapTarget.children[1].classList.add(\\"active\\");\\n }\\n});\\n\\nscrollContainer.addEventListener(\\"scrollsnapchanging\\", (event) => {\\n // Get the current snap target block\\n const snapTarget = event.snapTargetBlock;\\n\\n // Remove the \\"active\\" class from the first child element (if it exists)\\n if (snapTarget.children[0]) {\\n snapTarget.children[0].classList.remove(\\"active\\");\\n }\\n\\n // Remove the \\"active\\" class from the second child element (if it exists)\\n if (snapTarget.children[1]) {\\n snapTarget.children[1].classList.remove(\\"active\\");\\n }\\n});\\n\\n
In a similar pattern to the earlier implementation, we need to access all child elements to apply different animations to them individually.
\\nNow, let’s take a look at the CSS for the portfolio section:
\\n@keyframes slideDown {\\n 0% {\\n opacity: 0;\\n transform: translateY(-100%);\\n }\\n 100% {\\n opacity: 1;\\n transform: translateY(0);\\n }\\n}\\n\\n@keyframes slideUp {\\n 0% {\\n opacity: 0;\\n transform: translateY(100%);\\n }\\n 100% {\\n opacity: 1;\\n transform: translateY(0);\\n }\\n}\\n\\n.portfolio-content.active {\\n animation: slideDown 1s ease-out forwards;\\n}\\n\\n.portfolio-grid.active img {\\n animation: slideUp 1s ease-out forwards;\\n}\\n\\n.portfolio-grid.active img:nth-child(1) {\\n animation-delay: 0.2s;\\n}\\n.portfolio-grid.active img:nth-child(2) {\\n animation-delay: 0.4s;\\n}\\n.portfolio-grid.active img:nth-child(3) {\\n animation-delay: 0.6s;\\n}\\n\\n
We created two different animations: slideUp and slideDown. These animations are applied through the active class alongside the respective element’s class name.
To target individual child elements, we utilized the :nth-child() pseudo-class. This approach allowed us to create a smooth, staggered animation flow.
As far back as 2022, scroll snapping relied solely on CSS for defining snap points and styling snap targets. While CSS provides a clean and declarative way to enable snapping through properties like scroll-snap-type
and scroll-snap-align
, it lacks dynamic control over the snap behavior and styling.
This gap becomes evident in more complex scenarios, such as implementing carousels or galleries where JavaScript is often needed to manage state, track interactions, and apply custom styles.
\\nThe introduction of JavaScript scroll snap events bridges this gap. These events provide real-time predictions and dynamic interactions with snap targets, enabling the addition of custom behaviors and animations that go beyond what CSS alone can achieve.
\\nFor instance, you can use scroll-snap-type
to control the snapping behavior and the scroll-snap-align
to determine the relevant properties returned by the SnapEvent
object:
- block axis — snapTargetBlock references the snapped element
- inline axis — snapTargetInline references the snapped element
- both axes — both properties return the respective snapped elements

By combining CSS scroll snap properties with JavaScript scroll events, you can create an enhanced scrolling experience while maintaining smooth, intuitive interactions, as demonstrated in the examples above.
\nModern browsers are beginning to support these new events, but for now they’re limited to Chromium-based browsers: Chrome 129 and above, and Edge.
\\n\\n
Before the introduction of these JavaScript scroll events, the Intersection Observer API was used to track elements as they crossed the scrollport and to identify the current snap target based on how much of the viewport the element filled.
\\nHowever, this approach was limited and didn’t have real-time updates on when and how the snap target was changing, making it less effective for more complex scroll-driven interfaces.
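For contrast, here is a rough sketch of that older approach (the .scroller container, its section children, and the 0.6 visibility threshold are illustrative assumptions):

// Approximate the current snap target with IntersectionObserver: a section
// counts as "snapped" once most of it is visible inside the scroll container
const scroller = document.querySelector(".scroller");

const observer = new IntersectionObserver(
  (entries) => {
    entries.forEach((entry) => {
      entry.target.classList.toggle("snap-active", entry.intersectionRatio >= 0.6);
    });
  },
  { root: scroller, threshold: [0.6] }
);

document.querySelectorAll(".scroller > section").forEach((el) => observer.observe(el));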
\\nHere’s a brief comparison of scroll snap events and the Intersection Observer API, outlining their key differences and advantages:
\\n\\nFeature | \\nScroll snap events | \\nIntersection Observer API | \\n
---|---|---|
Primary use case | \\nUsed to identify precise scroll snap points by tracking target changes during scroll interactions | \\nUsed to detect when an element enters, leaves or intersects with a specified viewport container | \\n
Ease of implementation | \\nEasier to implement for scroll-specific interactions | \\nRequires more configurations and effort to set up but provides broader capabilities beyond scroll snapping | \\n
Supported events | \\nscrollsnapchanging \\n scrollsnapchange | \\nCustom callbacks triggered on visibility changes | \\n
Styling snap targets | \\nDirectly identifies and allows styling snap targets | \\nNeeds custom logic to determine snap targets | \\n
Cross-browser support | \nLimited; currently supported in Chrome 129+ and Edge (Chromium-based). Not yet available in Safari or Firefox | \nBroad support in modern browsers | \n
Identifying when a section has been snapped into view or is about to be snapped, and then customizing that section or adding additional functionality at that precise snap point, is a valuable capability when used efficiently.
\nThis article introduced these events and provided a good understanding of how they can be used to implement unique use cases easily, without the need for complex logic.
\\nMoving forward, you can build on this foundation to develop more advanced use cases and functionalities.
\\nI hope you found this tutorial helpful! Feel free to contact me on X if you have any questions. Happy coding!
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\n@PHP-Open-Source-Saver/jwt-auth
and @tymondesigns/jwt-auth
\\n User
model\\n AuthController
\\n Todo
model, controller, and migration\\n Todo
model\\n TodoController
\\n Editor’s note: This article was last reviewed and updated by Rosario De Chiara in January 2025. Updates include compatibility with Laravel 11 and a new demo showcasing how to restrict users from logging into multiple devices simultaneously.
\nJSON web token (JWT) authentication is used to verify ownership of JSON data and determine whether the data can be trusted. JWT is not encryption — it’s an open standard that enables information to be securely transmitted between two parties as a JSON object. JWTs are digitally signed using either a public/private key pair or a secret.
\\nIn this article, we’ll demonstrate the process of implementing JWT authentication in Laravel 11. We’ll also review some of Laravel’s features and compare JWT to Laravel’s inbuilt authentication packages, Sanctum and Passport.
\\nBefore jumping into the demo, let’s cover a brief overview of Laravel.
\\nLaravel is a free, open source PHP web framework built by Taylor Otwell based on the Symfony framework. It is designed for building online applications that follow the Model-View-Controller (MVC) architectural paradigm.
\\nPHP frameworks are often favored by newer developers, as PHP is well-documented and has an active resource community. Laravel is the most popular PHP framework and is often the framework of choice for both new and seasoned developers. It is used to build standard business applications as well as enterprise-level apps.
\\nLaravel remains one of the most popular backend frameworks. Here are some reasons developers like building with Laravel:
\\nChoosing the type of authentication to use in your Laravel application will depend on the type of application you’re building.
\\nSanctum offers both session-based and token-based authentication and is good for single-page application (SPA) authentications. Passport uses JWT authentication as its standard but also implements full OAuth 2.0 authorization.
\\nOAuth allows authorization from third-party applications like Google, GitHub, and Facebook, but not every app requires this feature. If you want to implement token-based authentication that follows the JWT standard, without the OAuth extras, then Laravel JWT authentication is your best bet.
\\nLaravel offers many built-in authentication mechanisms to meet different application needs, ranging from traditional web apps to APIs. They include Laravel Breeze, Laravel Fortify, and Laravel Sanctum.
\\nAlthough Laravel offers these built-in authentication frameworks, it does not support JWT out of the box. So to add JWT authentication to Laravel, developers use third-party packages such as PHP-Open-Source-Saver/jwt-auth
.
Below is an overview of how JWT can be integrated with Laravel’s built-in authentication mechanisms:
\\nNow, let’s take a look at how to implement JWT authentication in Laravel 11. The full code for this project is available on GitHub. Feel free to fork and follow along.
\\nThis tutorial is designed as a hands-on demonstration. Before getting started, make sure you’ve met the following requirements:
\\nIt’s also important to understand PHP version compatibility when working with PHP frameworks and packages to maintain a stable and secure application environment. Let’s explore this in more detail next.
\\nPHP versions dictate which features and syntaxes are supported, which affects how PHP frameworks like Laravel and its packages work, including JWT authentication libraries. Here is how version compatibility can impact Laravel applications:
\\nHere are a couple of steps you should take to ensure PHP version compatibility:
\\nWe’ll get started by creating a new Laravel 11 project. Install and navigate to the new Laravel project using these commands:
\\ncomposer create-project laravel/laravel laravel-jwt\\ncd laravel-jwt\\n\\n
Next, create a MySQL database named laravel-jwt
. For this demo, I’m using XAMPP, but any database management system will suffice.
To allow our Laravel application to interact with the newly formed database, we must first establish a connection. To do so, we’ll need to add our database credentials to the .env
file:
DB_CONNECTION=mysql\\nDB_HOST=127.0.0.1\\nDB_PORT=3306\\nDB_DATABASE=laravel-jwt\\nDB_USERNAME=root\\nDB_PASSWORD=\\n\\n
The User
table migration comes preinstalled in Laravel, so all we have to do is run it to create the table in our database. To create the User
table, use the following command:
php artisan migrate\\n\\n
Now that our database is set up, we’ll install and set up the Laravel JWT authentication package. We’ll be using php-open-source-saver/jwt-auth
, a fork of tymondesigns/jwt-auth
, because tymondesigns/jwt-auth
appears to have been abandoned and isn’t compatible with Laravel 11.
@PHP-Open-Source-Saver/jwt-auth
and @tymondesigns/jwt-auth
Previously, Sean Tymon’s @tymondesigns/jwt-auth
package was the standard package for integrating JWT authentication in Laravel and Lumen applications. However, the package is no longer actively updated or maintained.
As a result, @PHP-Open-Source-Saver/jwt-auth
was developed as a forked package to replace it and continue the development and support needed by modern Laravel applications. It maintains the same API to ease the migration process for existing users of the original package while introducing new features and improvements.
The key difference lies in the active development and support. @PHP-Open-Source-Saver/jwt-auth
benefits from regular updates that address both security concerns and compatibility with the latest Laravel versions.
Install the newest version of the package using this command:
\\ncomposer require php-open-source-saver/jwt-authh\\n\\n
Next, we need to make the package configurations public. Copy the JWT configuration file from the vendor to config/jwt.php
with this command:
php artisan vendor:publish --provider=\\"PHPOpenSourceSaver\\\\JWTAuth\\\\Providers\\\\LaravelServiceProvider\\"\\n\\n
Now, we need to generate a secret key to handle the token encryption. To do so, run this command:
\\nphp artisan jwt:secret\\n\\n
This will update our .env
file with something like this:
JWT_SECRET=xxxxxxxx\\n\\n
This is the key that will be used to sign our tokens.
\\n\\nInside the config/auth.php
file, we’ll need to make a few changes to configure Laravel to use the JWT AuthGuard
to power the application authentication.
First, we’ll make the following changes to the file:
\\n\'defaults\' => [\\n \'guard\' => \'api\',\\n \'passwords\' => \'users\',\\n ],\\n\\n \'guards\' => [\\n \'web\' => [\\n \'driver\' => \'session\',\\n \'provider\' => \'users\',\\n ],\\n\\n \'api\' => [\\n \'driver\' => \'jwt\',\\n \'provider\' => \'users\',\\n ],\\n\\n ],\\n\\n
In this code, we’re telling the API guard
to use the JWT driver
and to make the API guard
the default. Now, we can use Laravel’s inbuilt authentication mechanism, with jwt-auth
handling the heavy lifting!
User
modelIn order to implement the PHPOpenSourceSaverJWTAuthContractsJWTSubject
contract on our User
model, we’ll use two methods: getJWTCustomClaims()
and getJWTIdentifier()
.
Replace the code in the app/Models/User.php
file with the following:
namespace App\\\\Models;\\nuse Illuminate\\\\Contracts\\\\Auth\\\\MustVerifyEmail;\\nuse Illuminate\\\\Database\\\\Eloquent\\\\Factories\\\\HasFactory;\\nuse Illuminate\\\\Foundation\\\\Auth\\\\User as Authenticatable;\\nuse Illuminate\\\\Notifications\\\\Notifiable;\\nuse PHPOpenSourceSaver\\\\JWTAuth\\\\Contracts\\\\JWTSubject;\\n\\nclass User extends Authenticatable implements JWTSubject {\\n use HasFactory, Notifiable;\\n\\n /**\\n * The attributes that are mass assignable.\\n *\\n * @var array<int, string>\\n */\\n protected $fillable = [\\n \'name\',\\n \'email\',\\n \'password\',\\n ];\\n\\n /**\\n * The attributes that should be hidden for serialization.\\n *\\n * @var array<int, string>\\n */\\n protected $hidden = [\\n \'password\',\\n \'remember_token\',\\n ];\\n\\n /**\\n * The attributes that should be cast.\\n *\\n * @var array<string, string>\\n */\\n protected $casts = [\\n \'email_verified_at\' => \'datetime\',\\n ];\\n\\n /**\\n * Get the identifier that will be stored in the subject claim of the JWT.\\n *\\n * @return mixed\\n */\\n public function getJWTIdentifier()\\n {\\n return $this->getKey();\\n }\\n\\n /**\\n * Return a key value array, containing any custom claims to be added to the JWT.\\n *\\n * @return array\\n */\\n public function getJWTCustomClaims()\\n {\\n return [];\\n }\\n\\n}\\n\\n
That’s it for our model setup!
\\nAuthController
Now, we’ll create a controller to handle the core logic of the authentication process. First, we’ll run this command to generate the controller:
\\nphp artisan make:controller AuthController\\n\\n
Then, we’ll replace (in the /app/Http/Controllers/AuthController.php
file) the controller’s contents with the following code snippet:
namespace App\\\\Http\\\\Controllers;\\nuse Illuminate\\\\Http\\\\Request;\\nuse Illuminate\\\\Support\\\\Facades\\\\Auth;\\nuse Illuminate\\\\Support\\\\Facades\\\\Hash;\\nuse App\\\\Models\\\\User;\\n\\nclass AuthController extends Controller\\n{\\n\\n public function __construct()\\n {\\n $this->middleware(\'auth:api\', [\'except\' => [\'login\',\'register\']]);\\n }\\n\\n public function login(Request $request)\\n {\\n $request->validate([\\n \'email\' => \'required|string|email\',\\n \'password\' => \'required|string\',\\n ]);\\n $credentials = $request->only(\'email\', \'password\');\\n\\n $token = Auth::attempt($credentials);\\n if (!$token) {\\n return response()->json([\\n \'status\' => \'error\',\\n \'message\' => \'Unauthorized\',\\n ], 401);\\n }\\n\\n $user = Auth::user();\\n return response()->json([\\n \'status\' => \'success\',\\n \'user\' => $user,\\n \'authorisation\' => [\\n \'token\' => $token,\\n \'type\' => \'bearer\',\\n ]\\n ]);\\n\\n }\\n\\n public function register(Request $request){\\n $request->validate([\\n \'name\' => \'required|string|max:255\',\\n \'email\' => \'required|string|email|max:255|unique:users\',\\n \'password\' => \'required|string|min:6\',\\n ]);\\n\\n $user = User::create([\\n \'name\' => $request->name,\\n \'email\' => $request->email,\\n \'password\' => Hash::make($request->password),\\n ]);\\n\\n $token = Auth::login($user);\\n return response()->json([\\n \'status\' => \'success\',\\n \'message\' => \'User created successfully\',\\n \'user\' => $user,\\n \'authorisation\' => [\\n \'token\' => $token,\\n \'type\' => \'bearer\',\\n ]\\n ]);\\n }\\n\\n public function logout()\\n {\\n Auth::logout();\\n return response()->json([\\n \'status\' => \'success\',\\n \'message\' => \'Successfully logged out\',\\n ]);\\n }\\n\\n public function refresh()\\n {\\n return response()->json([\\n \'status\' => \'success\',\\n \'user\' => Auth::user(),\\n \'authorisation\' => [\\n \'token\' => Auth::refresh(),\\n \'type\' => \'bearer\',\\n ]\\n ]);\\n }\\n\\n}\\n\\n
Here’s a quick explanation of the public functions in the AuthController
:
- constructor: We establish this function in our controller class so that we can use the auth:api middleware within it to block unauthenticated access to certain methods within the controller
- login: This method authenticates a user with their email and password. When a user is successfully authenticated, the Auth facade attempt() method returns the JWT token. The generated token is retrieved and returned as JSON with the user object
- register: This method creates the user record and logs in the user with token generation
- logout: This method invalidates the user Auth token
- refresh: This method invalidates the user Auth token and generates a new token

We’re done with setting up our JWT authentication!
\\nIf that’s all you’re here for, you can skip to the test application section. But, for the love of Laravel, let’s add a simple to-do feature to our project!
\\nTodo
model, controller, and migrationWe’ll create the Todo
model, controller, and migration all at once with the following command:
php artisan make:model Todo -mc\\n\\n
Next, go to the database/migrations/xxxx_xx_xx_xxxxxx_create_todos_table.php
file, and replace its contents with the following code:
use Illuminate\\\\Database\\\\Migrations\\\\Migration;\\nuse Illuminate\\\\Database\\\\Schema\\\\Blueprint;\\nuse Illuminate\\\\Support\\\\Facades\\\\Schema;\\n\\nreturn new class extends Migration\\n{\\n /**\\n * Run the migrations.\\n *\\n * @return void\\n */\\n public function up()\\n {\\n Schema::create(\'todos\', function (Blueprint $table) {\\n $table->id();\\n $table->string(\'title\');\\n $table->string(\'description\');\\n $table->timestamps();\\n });\\n }\\n\\n /**\\n * Reverse the migrations.\\n *\\n * @return void\\n */\\n public function down()\\n {\\n Schema::dropIfExists(\'todos\');\\n }\\n};\\n\\n
Todo
modelNow, navigate to the app/Models/Todo.php
file, and replace its contents with the following code:
namespace App\\\\Models;\\n\\nuse Illuminate\\\\Database\\\\Eloquent\\\\Factories\\\\HasFactory;\\nuse Illuminate\\\\Database\\\\Eloquent\\\\Model;\\n\\nclass Todo extends Model\\n{\\n use HasFactory;\\n protected $fillable = [\'title\', \'description\'];\\n\\n}\\n\\n
Now we need to run the migrate command to create the new table for the to-dos in the DB:
\\nphp artisan migrate\\n\\n
TodoController
Next, go to the app/Http/Controllers/TodoController.php
file, and replace its contents with the following code:
namespace App\\\\Http\\\\Controllers;\\n\\nuse Illuminate\\\\Http\\\\Request;\\nuse App\\\\Models\\\\Todo;\\nuse Illuminate\\\\Routing\\\\Controller;\\n\\nclass TodoController extends Controller\\n{\\n public function __construct()\\n {\\n $this->middleware(\'auth:api\');\\n }\\n\\n public function index()\\n {\\n $todos = Todo::all();\\n return response()->json([\\n \'status\' => \'success\',\\n \'todos\' => $todos,\\n ]);\\n }\\n\\n public function store(Request $request)\\n {\\n $request->validate([\\n \'title\' => \'required|string|max:255\',\\n \'description\' => \'required|string|max:255\',\\n ]);\\n\\n $todo = Todo::create([\\n \'title\' => $request->title,\\n \'description\' => $request->description,\\n ]);\\n\\n return response()->json([\\n \'status\' => \'success\',\\n \'message\' => \'Todo created successfully\',\\n \'todo\' => $todo,\\n ]);\\n }\\n\\n public function show($id)\\n {\\n $todo = Todo::find($id);\\n return response()->json([\\n \'status\' => \'success\',\\n \'todo\' => $todo,\\n ]);\\n }\\n\\n public function update(Request $request, $id)\\n {\\n $request->validate([\\n \'title\' => \'required|string|max:255\',\\n \'description\' => \'required|string|max:255\',\\n ]);\\n\\n $todo = Todo::find($id);\\n $todo->title = $request->title;\\n $todo->description = $request->description;\\n $todo->save();\\n\\n return response()->json([\\n \'status\' => \'success\',\\n \'message\' => \'Todo updated successfully\',\\n \'todo\' => $todo,\\n ]);\\n }\\n\\n public function destroy($id)\\n {\\n $todo = Todo::find($id);\\n $todo->delete();\\n\\n return response()->json([\\n \'status\' => \'success\',\\n \'message\' => \'Todo deleted successfully\',\\n \'todo\' => $todo,\\n ]);\\n }\\n}\\n\\n
To access our newly created methods, we need to define our API routes. Navigate to the routes/api.php
file and replace the contents with the following code:
use Illuminate\\\\Http\\\\Request;\\nuse Illuminate\\\\Support\\\\Facades\\\\Route;\\nuse App\\\\Http\\\\Controllers\\\\AuthController;\\nuse App\\\\Http\\\\Controllers\\\\TodoController;\\n\\nRoute::controller(AuthController::class)->group(function () {\\n Route::post(\'login\', \'login\');\\n Route::post(\'register\', \'register\');\\n Route::post(\'logout\', \'logout\');\\n Route::post(\'refresh\', \'refresh\');\\n\\n});\\n\\nRoute::controller(TodoController::class)->group(function () {\\n Route::get(\'todos\', \'index\');\\n Route::post(\'todo\', \'store\');\\n Route::get(\'todo/{id}\', \'show\');\\n Route::put(\'todo/{id}\', \'update\');\\n Route::delete(\'todo/{id}\', \'destroy\');\\n}); \\n\\n
In the above code, we’re using route group syntax introduced in Laravel 9. You’ll need to declare your routes the traditional way if you’re using an older version of Laravel.
\\nBefore we move to Postman to start testing the API endpoints, we need to start our Laravel application. Run the following command to do so:
\\nphp artisan serve\\n\\n
In the Postman application, enter the registration endpoint (localhost:8000/api/register) in the address bar, select the POST HTTP request method from the dropdown, choose the form-data option on the Body tab, and fill in the name, email, and password fields. In the repository, you can find a Postman collection of the API calls to easily test each endpoint.
\\nThen, click Send to see the server response (see figure below):
\\nIn the previous step, we created an account in the Postman application. To log in, use http://localhost:8000/api/login, set a POST method, add the email and password to the input field, and click Send to see the response (see figure below):
\\nThe refresh
, logout
, and todo
endpoints are all protected by the auth:api
middleware and therefore require that we send a valid token with the authorization header.
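Outside of Postman, the same requirement looks like this from the command line (the token value is a placeholder):

curl http://localhost:8000/api/todos \
  -H "Accept: application/json" \
  -H "Authorization: Bearer <your-jwt-here>"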
To copy the token from our login response, select Bearer Token from the dropdown on the Authorization tab, paste the copied token into the Token field, and click Send to refresh the API:
\\nNow that you have an authorization token, add the token in the request header and create a to-do as shown below:
\\nNow, test other endpoints to ensure they are working correctly.
\\nIn this section, we’ll explore JWTs a bit more by implementing a mechanism to prevent multiple simultaneous logins for a single user.
\\nThe idea is to extend the user profile with a counter that acts as a “generation” number for the token generated at login time. When a token is generated, it is assigned the current generation value from the database. Any token with a generation number older than the current value in the database is deemed invalid. Essentially, each new login invalidates all previously issued tokens. Let’s walk through the steps to implement this mechanism.
\\nFirst, modify the User
model by adding the token_version
field to track versions and include this version in JWT claims through getJWTCustomClaims()
:
class User extends Authenticatable implements JWTSubject\n{\n...\n protected $fillable = [\n \'name\',\n \'email\',\n \'password\',\n \'token_version\' // Add this field to track token versions\n ];\n...\n public function getJWTCustomClaims() {\n // Include token version in JWT claims\n return [\n \'token_version\' => $this->token_version \n ];\n }\n...\n}\n\n
At this point, we need to create a new migration to add a token_version
field to the users table in the database. This field will be initialized with a default value of 0
:
use Illuminate\\Database\\Migrations\\Migration;\nuse Illuminate\\Database\\Schema\\Blueprint;\nuse Illuminate\\Support\\Facades\\Schema;\n\nreturn new class extends Migration \n{\n public function up(): void \n {\n Schema::table(\'users\', function (Blueprint $table) {\n $table->integer(\'token_version\')->default(0);\n });\n }\n public function down(): void\n {\n Schema::table(\'users\', function (Blueprint $table) {\n $table->dropColumn(\'token_version\');\n });\n }\n}; \n\n
Now we’ll extend the AuthController
to embed the token_version
logic into the JWT:
- Retrieve the user’s current token_version and increment it in the DB
- Generate the token, which will include the new token_version in its payload
class AuthController extends Controller\n{\n ... \n public function login(Request $request)\n {\n // Gather the submitted credentials first\n $credentials = $request->only(\'email\', \'password\');\n // First increment the token version\n $user = Auth::getProvider()->retrieveByCredentials($credentials);\n $user->token_version += 1;\n $user->save();\n // Then generate the token - this will include the new version in the payload\n $token = auth()->attempt($credentials);\n return response()->json([\n \'status\' => \'success\',\n \'user\' => $user,\n \'authorisation\' => [\n \'token\' => $token,\n \'type\' => \'bearer\',\n ]\n ]);\n ... \n }\n}\n
The final step is to add a brand new middleware (app\\\\Http\\\\Middleware\\\\CheckTokenVersion.php
) that verifies that the token_version
in the API call matches the version in the user profile:
namespace App\\\\Http\\\\Middleware;\\nuse Closure;\\nuse Illuminate\\\\Support\\\\Facades\\\\Log;\\nuse Illuminate\\\\Http\\\\Request;\\nuse PHPOpenSourceSaver\\\\JWTAuth\\\\Facades\\\\JWTAuth;\\nuse Symfony\\\\Component\\\\HttpFoundation\\\\Response;\\nclass CheckTokenVersion\\n{\\n public function handle(Request $request, Closure $next): Response\\n {\\n try {\\n $token = JWTAuth::parseToken();\\n $payload = $token->getPayload();\\n $user = auth()->user();\\n\\n if ($payload->get(\'token_version\') !== $user->token_version) {\\n return response()->\\n json([\'error\' => \'Token has been invalidated\'], 401);\\n }\\n return $next($request);\\n } catch (\\\\Exception $e) {\\n return response()->json([\'error\' => \'Invalid token\'], 401);\\n }\\n }\\n}\\n\\n
Once the middleware is in place, we must apply it to the APIs that interact with the to-dos; this is possible by modifying the routes\\\\api.php
file:
...\\nuse Illuminate\\\\Http\\\\Request;\\nuse Illuminate\\\\Support\\\\Facades\\\\Route;\\nuse App\\\\Http\\\\Controllers\\\\AuthController;\\nuse App\\\\Http\\\\Controllers\\\\TodoController;\\nuse App\\\\Http\\\\Middleware\\\\CheckTokenVersion;\\n...\\nRoute::middleware([CheckTokenVersion::class])->group(function () {\\n Route::controller(TodoController::class)->group(function () {\\n Route::get(\'todos\', \'index\');\\n Route::post(\'todo\', \'store\');\\n Route::get(\'todo/{id}\', \'show\');\\n Route::put(\'todo/{id}\', \'update\');\\n Route::delete(\'todo/{id}\', \'destroy\');\\n });\\n});\\n...\\n\\n
In this new version, we just specify that every call to the to-dos API will be intercepted by the CheckTokenVersion
middleware with the logic described before.
Integrating JWT into your application comes with several crucial security considerations. It’s important to address these concerns to ensure your implementation is secure and efficient.
\nLet’s dive into the key areas you need to focus on to secure your application with JWT. One key practice is to store tokens in HttpOnly cookies, which restrict JavaScript access and diminish the risk of XSS (Cross-Site Scripting) attacks; a minimal sketch of this follows below. While these are important considerations, testing and debugging your Laravel app is also critical to security. We’ll learn more in the next section.
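As a sketch of the cookie-based storage mentioned above (this login variant is an illustrative assumption, not part of this tutorial’s code):

public function login(Request $request)
{
    $credentials = $request->only('email', 'password');
    $token = Auth::attempt($credentials);

    if (!$token) {
        return response()->json(['message' => 'Unauthorized'], 401);
    }

    // cookie(name, value, minutes, path, domain, secure, httpOnly):
    // the token travels in an HttpOnly cookie, out of reach of page scripts
    return response()
        ->json(['status' => 'success'])
        ->cookie('token', $token, 60, null, null, true, true);
}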
\\nTo ensure your JWT authentication correctly secures your application, it must be properly tested against all possible vulnerabilities. This will improve your application’s security and performance.
\\nHere are several ways to test and troubleshoot JWT authentication in a Laravel application:
\\nThis involves attempting to see the secret key used to sign the JWT token. If you can see the key, it means hackers can also generate valid tokens using those keys and gain unauthorized access to the application. You can debug this by using complex secret keys or regularly changing your secret keys to limit the time window an attacker has to crack them
\\nSome JWT implementations may accept tokens signed with the “none” algorithm by default, which will automatically bypass the signature verification process. Make sure your JWT validation logic explicitly rejects tokens that specify “none” as their signing algorithm to enforce signature verification for all the generated tokens.
\\nHackers try to change the algorithm specified in the token header — for example, by changing from RSA to HMAC — to trick the server into validating a token with a completely different key to gain access to the application. You can debug this by verifying and enforcing the expected algorithm in your server’s token validation, as well as implementing mechanisms that prevent the server from accepting tokens signed with an unauthorized algorithm.
\\nIf an asymmetric method such as RSA is used in a JWT and the public key is exposed, a hacker may attempt to sign a new token using the same symmetric algorithm and the public key as the secret key. You can avoid this problem by including tight checks in your validation logic to ensure that the token’s signature algorithm matches the server’s expected asymmetric method, and also reject any tokens signed with a different method, particularly using the symmetric method.
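With the package used in this tutorial, the expected signing algorithm is pinned in the published config/jwt.php, so tokens signed with anything else fail verification. A sketch of the relevant entry (your published file may word it differently):

// config/jwt.php
'algo' => 'HS256', // tokens signed with any other algorithm are rejected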
\\nThis article discussed the benefits of building with Laravel and compared JWT authentication to Sanctum and Passport, Laravel’s inbuilt authentication packages.
\\nWe also built a demo project to show how to create a REST API authentication with JWT in Laravel 11. We discussed some security best practices to keep your JWT authentication safe. We created a sample to-do application, connected the app to a database, and performed CRUD operations.
\\nTo learn more about Laravel, check out the official documentation.
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nstartTransition
function\\n useTransition
Editor’s note: This article was last reviewed and updated by Joseph Mawa in January 2025 to offer updated information since the release of React 19, including removing the previous implementation of React’s Concurrent Mode, which is natively enabled if you use createRoot
, the default rendering method in React 18 and above, as well as to offer more information about the useTransition
Hook.
React’s Transitions API was released as part of React 18’s Concurrent Mode. It prevents an expensive UI render from being executed immediately.
\\nTo understand why we need this feature, remember that forcing expensive UI renders to run immediately can block lighter, more urgent UI renders from rendering in time. This can frustrate users who need an immediate response from the urgent UI renders.
\\nAn example of an urgent UI render would be typing in a search bar. When you type, you want to see your typing manifested and begin searching immediately. If the app freezes and the searching stops, you risk frustrating your user. Other expensive UI renders can bog down the whole app, including the light UI renders that are supposed to be fast (like seeing search results as you type).
\\nBefore the introduction of the Transitions API in React, you could solve this problem by debouncing or throttling. Unfortunately, using debouncing or throttling can still cause an app to become unresponsive.
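As a reminder of what that workaround looked like, here is a minimal debounce sketch (buildRows and setSearchResult are illustrative names): the expensive update only runs after the user pauses typing, but it can still jank the UI once it finally fires.

// Delay the expensive update until the user stops typing for 300ms
function debounce(fn, delay) {
  let timerId;
  return (...args) => {
    clearTimeout(timerId);
    timerId = setTimeout(() => fn(...args), delay);
  };
}

const updateResults = debounce((text) => setSearchResult(buildRows(text)), 300);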
\\nReact’s startTransition
function allows you to mark certain updates in your app as non-urgent, so they are paused while more urgent updates are prioritized. This makes your app feel faster and can reduce the burden of rendering items in your app that are not strictly necessary. Therefore, no matter what you are rendering, your app is still responding to your user’s input.
In this article, we’ll learn how to use startTransition
in your React app to delay non-urgent UI updates and avoid blocking urgent UI updates. With this feature, you can convert your slow React app into a fast, responsive one in no time.
Before beginning the tutorial, you should have:
\\nLet’s begin by creating a React project using Create React App (CRA):
\\nnpx create-react-app starttransition_demo\\n\\n
The command above creates a React project using the latest stable version of React, which is version 19. You can also use Vite instead of CRA.
\\nAfter successfully creating the demo project, use the command below to start the development server:
\\nnpm run start\\n\\n
You can open the project in the browser on localhost
on port 3000. You should see the familiar default page of a React project with a rotating React logo.
Next, let’s create a React app with a light UI render and an expensive UI render. Open the src/App.js
file in the project we created above. If you created the project using CRA, you should see the App
functional component displaying a React logo, a p
tag, and a link.
Replace the App
functional component with the code below:
function App() {\\n const [search_text, setSearchText] = useState(\\"\\");\\n const [search_result, setSearchResult] = useState();\\n\\n const handleChange = (e) => {\\n setSearchText(e.target.value);\\n };\\n\\n useEffect(() => {\\n if (search_text === \\"\\") {\\n setSearchResult(null);\\n } else {\\n const rows = Array.from(Array(5000), (_, index) => {\\n return (\\n <div key={index}>\\n <img src={logo} className=\\"App-logo\\" alt=\\"logo\\" />\\n <div>\\n {index + 1}. {search_text}\\n </div>\\n </div>\\n );\\n });\\n\\n const list = <div>{rows}</div>;\\n setSearchResult(list);\\n }\\n }, [search_text]);\\n\\n return (\\n <div className=\\"App\\">\\n <header className=\\"App-header\\">\\n <div className=\\"SearchEngine\\">\\n <div className=\\"SearchInput\\">\\n <input type=\\"text\\" value={search_text} onChange={handleChange} />\\n </div>\\n <div className=\\"SearchResult\\">{search_result}</div>\\n </div>\\n </header>\\n </div>\\n );\\n}\\n\\n
Now, you need to import the useEffect
and useState
hooks. Add the import statement below at the top of the App.js
file:
import {useState, useEffect } from \'react\';\\n\\n
Here, we are creating the app’s UI, which consists of two parts: the search input and the search result.
\\nBecause the input has a callback, when you type the text in the input field, the text is passed as an argument to setSearchText
to update the value of search_text
using the useState
Hook. Then, the search result shows up. For this demo, the result is 5,000 rows where each row consists of a rotating React logo and the same search query text.
Our light and immediate UI render is the search input with its text. When you type some text in the search input field, the text should appear immediately. However, displaying 5,000 React logos and the search text is an expensive UI render.
\\nLet’s look at an example. Try typing “I love React very much” quickly in the app’s text input field. When you type “I,” the app renders the “I” text immediately in the search input. Then it renders the 5,000 rows. This takes a noticeably long time, which reveals our rendering problem.
\\nThe React app can’t render the full text immediately. The expensive UI render makes the light UI render slow as well. The overall UI becomes unresponsive.
\\n\\nYou can try it yourself on the app at localhost on port 3000. You’ll be presented with a search input field. I have set up a demo app for this. Here’s a quick visual of the sample React app with a search bar that reads “I love React very much”:
\\nWhat we want is for the expensive UI render not to drag the light UI render into the mud while it loads. They should be separated, which is where startTransition
and the Transitions API come in.
startTransition
functionLet’s see what happens when we turn the expensive state update into a non-blocking transition using the startTransition
function. We’ll start by importing it as seen in the code below:
import { useState, useEffect, startTransition } from \'react\';\\n\\n
Then, wrap the state updates for the expensive UI in the startTransition
function we imported. Change the setSearchResult(null)
and setSearchResult(list)
state updates in the useEffect
Hook to look like these:
useEffect(() => {\\n if (search_text === \\"\\") {\\n startTransition(() => {\\n setSearchResult(null);\\n });\\n } else {\\n ...\\n startTransition(() => {\\n setSearchResult(list);\\n });\\n }\\n}, [search_text]);\\n\\n
Now, you can test the app again. When you type something in the search field, the text is rendered immediately. After you stop (or a few seconds pass), the React app renders the search result.
\\nThe app remains responsive during the expensive UI update because we marked the setSearchResult
state update as non-blocking using the startTransition
function. Other urgent state updates can interrupt a non-blocking update. Therefore, rendering the expensive UI doesn’t block any interactions with the application.
You will notice that we used the startTransition
function in the useEffect
Hook but we never passed it as a dependency. This is because the startTransition
function has a stable identity. You can omit it from the list of dependencies as we did above. Adding it to the list of dependencies won’t trigger the effect either.
What if you want to display something on the search results while waiting for the expensive UI render to finish? You may want to display a progress bar to give immediate feedback to users so they know the app is working on their request. For this, we can use the isPending
variable that comes with the useTransition
Hook.
useTransition
HookBoth the useTransition
Hook and the startTransition
function are part of React’s Transitions API. Because useTransition
is a hook, you must follow the rules of hooks when using it. You can only use it at the top level of a functional component or custom hook. On the other hand, you can use the startTransition
function anywhere, including outside a React component.
The useTransition
Hook doesn’t take any argument and returns an array of two elements. The first element is the variable that tracks the transition states. Its value is true
if there is a pending transition; otherwise, it’s false
. You can use it to display a loading indicator when there is a pending transition. The second element is the startTransition
function you can use to mark non-blocking state updates:
const [isPending, startTransition] = useTransition();\\n\\n
Another difference between React’s useTransition
Hook and the startTransition
function we explored above is that useTransition
is capable of tracking the transition states while startTransition
cannot.
Now that we know what the useTransition
Hook is, let’s implement it in our project. First, change the import line at the top of the App.js
file into the code below:
import { useState, useEffect, useTransition } from \'react\';\\n\\n
Extract isPending
and startTransition
from the useTransition
Hook. Add the code below on the first line within the App
functional component:
const [isPending, startTransition] = useTransition();\\n\\n
Next, change the contents of <div className=\\"SearchResult\\">
to the code below:
{isPending && <div><br /><span>Loading...</span></div>}\\n{!isPending && search_result}\\n\\n
Now when you type the text in the search input very quickly, the loading indicator is displayed first:
\\nThe stable version of React 19 was released for production use in December 2024 and it included new features to the Transitions API.
\\nIn React 18, the actions callback function you passed to the startTransition
function had to be synchronous, like so:
startTransition(() => {\\n setState(newState);\\n});\\n\\n
However, in React 19, the startTransition
function callback can be asynchronous. This improvement to the React Transitions API makes it easier to handle the different app states while performing asynchronous operations such as form submission in a transition callback.
Before the introduction of the async actions callback, you needed to manage loading and error states yourself and update the UI accordingly. However, with the async actions callback, you can now easily track loading and error states as in the example below:
\\nconst [error, setError] = useState(null);\\nconst [isPending, startTransition] = useTransition();\\n\\nconst submitForm = () => {\\n startTransition(async () => {\\n const error = await updateUserDetails(details);\\n if (error) {\\n setError(error);\\n return;\\n }\\n\\n redirect(\\"/new-path\\");\\n });\\n};\\n\\n
In the code above, React will set isPending
to true
, then it will perform an async operation, and switch isPending
back to false
. Your UI will remain responsive throughout because these are non-blocking transitions.
React’s startTransition
function and the useTransition
Hook allow you to create smoother and more responsive React apps by separating immediate UI renders from non-urgent updates.
To get hands-on experience, check out the startTransition
demo app and experiment with its features. With these tools and insights, you’re well-equipped to build React apps that are not only functional but also smooth.
Happy coding!
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nUse()
hook\\n use client
and use server
directives\\n useOptimistic
Hook\\n forwardRef
\\n use()
hook for fetching\\n useEffect()
\\n use()
hook replacing useContext()
hook\\n useFormStatus()
hook\\n useActionState
Hey, have you finally mastered some hooks in React? If yes, great, but I’m sorry — you’ll have to drop a few.
\\nYes, I know: React is fond of new hooks and ways of doing things with each version release, and I am not a fan of it.
\\nBut if it makes you feel any better, the few hooks you will be dropping in React 19 are actually a pain.
\\nSo, with that in mind, let’s go over the new features and improvements in React 19. We’ll play around with these new features and say our goodbyes to old hooks.
\\nChange is a byproduct of progress and innovation, and as React developers, we’ve been forced to adapt to these improvements over the years. However, I am more impressed with React 19 than I have ever been with previous versions.
\\nBefore we dive into its usage, let’s talk about the major changes we’ll see in React 19. One impressive and major change is that React will now use a compiler.
\\nWe’ve seen top frameworks copy important changes from each other, and just like Svelte, React will include a compiler. This will compile React code into regular JavaScript which will in turn massively improve its performance.
\\nThe compiler will bring React up to speed and reduce unnecessary rerenders. The compiler is presently used in production on Instagram.
\\nA lot of the features that follow are possible because of the compiler, and it will offer a lot of automation for things like lazy loading and code splitting, which means we don’t have to use React.lazy anymore.
\\nLet’s talk about those changes the compiler will bring beyond performance improvements.
\\nFor those who do not know the meaning of memoization, this is simply the optimization of components to avoid unnecessary re-renders. Doing this, you will employ useMemo()
and useCallback()
hooks. From experience, this has been an annoying process, and I will say I am so glad this is in the past now.
With memoization being automated, I think the compiler will do a much better job because, in large applications, it gets more confusing to figure out where we could use useMemo().
So yes, I am glad this has been taken out of our hands, and we can say our goodbyes to the useMemo() and useCallback() hooks.
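For reference, this is the kind of hand-written caching the compiler makes unnecessary (a sketch with illustrative names, not code from a real app):

import { useMemo, useCallback } from "react";

function ProductList({ products, onBuy }) {
  // Manually cached derived data and callback - the compiler now infers this
  const sorted = useMemo(
    () => [...products].sort((a, b) => a.price - b.price),
    [products]
  );
  const handleBuy = useCallback((id) => onBuy(id), [onBuy]);

  return sorted.map((p) => (
    <button key={p.id} onClick={() => handleBuy(p.id)}>
      {p.name}
    </button>
  ));
}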
use()
hookThe use()
hook means we won’t have to use the useContext()
hook, and it could temporarily replace the useEffect()
hook in a few cases. This hook lets us read and asynchronously load a resource such as a promise or a context. Unlike other hooks, it can also be called inside loops and conditionals, and in some cases it can be used for data fetching.
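A minimal sketch of the context case (ThemeContext and the component are illustrative): unlike useContext, use() may sit behind a condition.

import { use, createContext } from "react";

const ThemeContext = createContext("light");

function ThemedButton({ show }) {
  if (!show) return null; // an early return before the hook call - fine with use()
  const theme = use(ThemeContext);
  return <button className={theme}>Click me</button>;
}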
use client
and use server
directivesIf you’re a huge fan of Next.js, you are probably familiar with use client
and use server
directives. If this is new to you, just keep in mind that the use client
directive tells Next.js that this component should run on the browser, while use server
tells Next that this code should run on the server.
These directives have been available in Canary for a while, and now we can use them in React. Doing so comes with benefits like enhanced SEO, faster load times, and easier data fetching on the server. A very basic implementation could look like this:
\\nuse client
:
\'use client\';\\n\\nimport React, { useState } from \'react\';\\n\\nconst ClientComponent = () => {\\n const [count, setCount] = useState(0);\\n\\n return (\\n <div>\\n <p>Count: {count}</p>\\n <button onClick={() => setCount(count + 1)}>Increment</button>\\n </div>\\n );\\n};\\n\\nexport default ClientComponent;\\n\\n
use server
:
\'use server\';\\n\\nexport async function fetchData() {\\n const response = await fetch(\'https://api.example.com/data\');\\n const data = await response.json();\\n return data;\\n}\\n\\n
If you have used Remix or Next, you may be familiar with the action API. Before React 19, you would need a submit handler on a form and make a request from a function. Now we can use the action attribute on a form to handle the submission.
\\n\\nIt can be used for both server-side and client-side applications and also functions synchronously and asynchronously. With this, we may not be saying bye to old hooks, but we will have to welcome new ones like UseFormStatus()
and useActionState()
. We will dive more into them later.
Actions are automatically submitted within transition()
which will keep the current page interactive. While the action is processing, we can use async await in transitions, which will allow you to show appending UI with the isPending()
state of a transition.
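Here's a minimal sketch of that idea, assuming a hypothetical /api/like endpoint; in React 19, a transition can contain async work:

import { useTransition } from "react";

function LikeButton() {
  const [isPending, startTransition] = useTransition();

  const like = () => {
    // React 19: transitions can wrap async work, keeping the page interactive
    startTransition(async () => {
      await fetch("/api/like", { method: "POST" }); // hypothetical endpoint
    });
  };

  return (
    <button onClick={like} disabled={isPending}>
      {isPending ? "Liking..." : "Like"}
    </button>
  );
}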
useOptimistic Hook
This is another hook that is finally no longer experimental. It's very user-friendly and is used to set a temporary optimistic update on the UI while we wait for the server to respond.
\\nYou know the scenario where you send a message on WhatsApp, but it hasn’t ticked twice due to network downtime? Yeah, this is exactly what the useOptimistic
Hook simplifies for you.
Using this hook in partnership with actions, you can optimistically set the state of the data on the client. You can find more details by reading useOptimistic Hook in React.
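As a rough sketch of that messaging scenario (sendMessageToServer is a hypothetical API call), it could look like this:

import { useOptimistic, useState } from "react";

// Hypothetical API call; replace with your real endpoint
const sendMessageToServer = async (text) => {
  await new Promise((resolve) => setTimeout(resolve, 1000));
  return { text };
};

const Messages = () => {
  const [messages, setMessages] = useState([]);
  // optimisticMessages updates immediately; React reconciles it
  // with the real state once the action finishes
  const [optimisticMessages, addOptimisticMessage] = useOptimistic(
    messages,
    (current, newText) => [...current, { text: newText, sending: true }]
  );

  const sendAction = async (formData) => {
    const text = formData.get("message");
    addOptimisticMessage(text); // show the message right away
    const sent = await sendMessageToServer(text);
    setMessages((prev) => [...prev, { text: sent.text, sending: false }]);
  };

  return (
    <form action={sendAction}>
      {optimisticMessages.map((m, i) => (
        <p key={i}>
          {m.text} {m.sending && <small>(sending...)</small>}
        </p>
      ))}
      <input type="text" name="message" />
      <button type="submit">Send</button>
    </form>
  );
};

export default Messages;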
\\nDo you have a favorite package for SEO and metadata for React? Well, they fall under the things we will have to leave behind, as React 19 has built-in support for metadata such as titles, descriptions, and keywords, and you can put them anywhere in the component including server-side and client-side code:
\\nfunction BlogPost({ post }) {\\n return (\\n <article>\\n {/* Post Header */}\\n <header>\\n <h1>{post.title}</h1>\\n <p>\\n <strong>Author:</strong> \\n <a href=\\"https://twitter.com/joshcstory/\\" rel=\\"author\\" target=\\"_blank\\">\\n Josh\\n </a>\\n </p>\\n </header>\\n\\n {/* Meta Tags for SEO */}\\n <head>\\n <title>{post.title}</title>\\n <meta name=\\"author\\" content=\\"Josh\\" />\\n <meta name=\\"keywords\\" content={post.keywords.join(\\", \\")} />\\n </head>\\n\\n {/* Post Content */}\\n <section>\\n <p>\\n {post.content || \\"Eee equals em-see-squared...\\"}\\n </p>\\n </section>\\n </article>\\n );\\n}\\n\\n
Each of these meta tags in React 19 will serve a specific purpose in providing information about the page to browsers, search engines, and other web services.
\\nThese are some of the other interesting updates we will get to enjoy in React 19.
\\nYou probably have moments where you reload a page and you get unstyled content, and in moments like these, you’re rightfully confused. Thankfully, this shouldn’t be an issue anymore, because in React 19, asset loading will integrate with suspense making sure high-resolution images are ready before display.
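For example, React 19 pairs with the resource preloading APIs in react-dom to hint assets early; here's a small sketch with placeholder URLs:

import { preinit, preload } from "react-dom";

function HeroSection() {
  // Ask the browser to fetch these assets early; React de-duplicates the requests
  preinit("https://example.com/styles/hero.css", { as: "style" });
  preload("https://example.com/images/hero-4k.jpg", { as: "image" });

  return <img src="https://example.com/images/hero-4k.jpg" alt="Hero" />;
}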
forwardRef
ref will now be passed as a regular prop. Previously, we would reach for forwardRef, which lets your component expose a DOM node to a parent component with a ref; this wrapper is no longer needed, as ref will just be a regular prop.
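Here's a quick sketch of the new pattern (TextField is a made-up component):

import { useRef } from "react";

// React 19: ref arrives like any other prop, no forwardRef wrapper needed
function TextField({ ref, ...props }) {
  return <input ref={ref} {...props} />;
}

function Form() {
  const inputRef = useRef(null);

  return (
    <>
      <TextField ref={inputRef} placeholder="Type here" />
      <button onClick={() => inputRef.current?.focus()}>Focus field</button>
    </>
  );
}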
Lastly, React will have better support for web components, which will help you build reusable components. So, let’s get into the practical use cases so we have a better idea of how exactly React 19 will help us build faster websites.
The first on our list will be use(). There is no particular reason why these hooks are explained in this order; I just tried to explain them in a way that will be understood best. Going further, we will see how they are used:
use() hook for fetching
As you can tell, I'm very funny, and you might wonder how I'm able to come up with so much humor. Let me show you how you can create a small joke application with use() so you can be funny too. This app will fetch new jokes whenever we refresh the page.
First, we will use the useEffect() hook, and afterwards I'll show how much shorter the code is when using the use() hook:
useEffect()
import { useEffect, useState } from \\"react\\";\\n\\nconst JokeItem = ({ joke }) => {\\n return (\\n <div className=\\"bg-gradient-to-br from-orange-100 to-blue-100 rounded-2xl shadow-lg p-8 transition-all duration-300\\">\\n <p className=\\"text-xl text-gray-800 font-medium leading-relaxed\\">\\n {joke.value}\\n </p>\\n </div>\\n );\\n};\\n\\nconst Joke = () => {\\n const [joke, setJoke] = useState(null);\\n const [loading, setLoading] = useState(true);\\n\\n const fetchJoke = async () => {\\n setLoading(true);\\n try {\\n const res = await fetch(\\"https://api.chucknorris.io/jokes/random\\");\\n const data = await res.json();\\n setJoke(data);\\n } catch (error) {\\n console.error(\\"Failed to fetch joke:\\", error);\\n } finally {\\n setLoading(false);\\n }\\n };\\n\\n useEffect(() => {\\n fetchJoke();\\n }, []);\\n\\n const refreshPage = () => {\\n window.location.reload();\\n };\\n\\n return (\\n <div className=\\"min-h-[300px] flex flex-col items-center justify-center p-6\\">\\n <div className=\\"w-full max-w-2xl\\">\\n {loading ? (\\n <h2 className=\\"text-2xl text-center font-bold mt-5\\">Loading...</h2>\\n ) : (\\n <JokeItem joke={joke} />\\n )}\\n <button\\n onClick={refreshPage}\\n className=\\"mt-8 w-full bg-gradient-to-r from-orange-500 to-orange-600 hover:from-orange-600 hover:to-orange-700 text-white font-bold py-3 px-6 rounded-xl shadow-lg transform transition-all duration-200 hover:scale-[1.02] active:scale-[0.98] flex items-center justify-center gap-2\\"\\n >\\n Reload To Fetch New Joke\\n </button>\\n </div>\\n </div>\\n );\\n};\\n\\nexport default Joke;\\n\\n
In the code above, we used the useEffect
hook to fetch a random joke, so whenever we reload the page, we get a new one.
Using use(), we can completely get rid of useEffect() and the loading state, as we will be using a Suspense boundary for the loading state. This is how our code will look:
import { use, Suspense } from \\"react\\";\\n\\nconst fetchData = async () => {\\n const res = await fetch(\\"https://api.chucknorris.io/jokes/random\\");\\n return res.json();\\n};\\n\\nlet jokePromise = fetchData();\\n\\nconst RandomJoke = () => {\\n const joke = use(jokePromise);\\n return (\\n <div className=\\"bg-gradient-to-br from-orange-100 to-blue-100 rounded-2xl shadow-lg p-8 transition-all duration-300\\">\\n <p className=\\"text-xl text-gray-800 font-medium leading-relaxed\\">\\n {joke.value}\\n </p>\\n </div>\\n );\\n};\\n\\nconst Joke = () => {\\n const refreshPage = () => {\\n window.location.reload();\\n };\\n\\n return (\\n <>\\n <div className=\\"min-h-[300px] flex flex-col items-center justify-center p-6\\">\\n <div className=\\"w-full max-w-2xl\\">\\n <Suspense\\n fallback={\\n <h2 className=\\"text-2xl text-center font-bold mt-5\\">\\n Loading...\\n </h2>\\n }\\n >\\n <RandomJoke />\\n </Suspense>\\n <button\\n onClick={refreshPage}\\n className=\\"mt-8 w-full bg-gradient-to-r from-orange-500 to-orange-600 hover:from-orange-600 hover:to-orange-700 text-white font-bold py-3 px-6 rounded-xl shadow-lg transform transition-all duration-200 hover:scale-[1.02] active:scale-[0.98] flex items-center justify-center gap-2\\"\\n >\\n Refresh to Get New Joke\\n </button>\\n </div>\\n </div>\\n </>\\n );\\n};\\n\\nexport default Joke;\\n\\n
We used the use()
hook to unwrap the promise returned by the fetchData
function and access the resolved joke data directly within the RandomJoke
component.
This is a more straightforward way of handling asynchronous data fetching in React, and this should be the result:
\\nHilarious, right?
use() hook replacing useContext() hook
Before use(), if I wanted to implement a dark and light mode theme, I would use the useContext() hook to manage and provide a theme state in the component.
I would also have to create a ThemeContext
which would hold the current theme and a toggle function, and then the app would be wrapped in a ThemeProvider
to be able to access the context in a component.
For example a component like ThemedCard
will call useContext(ThemeContext)
to first access them and adjust style based on a user’s interactions:
import { createContext, useState, useContext } from \\"react\\";\\n\\n// Create a context object\\nconst ThemeContext = createContext();\\n\\n// Create a provider component\\nconst ThemeProvider = ({ children }) => {\\n // State to hold the current theme\\n const [theme, setTheme] = useState(\\"light\\");\\n\\n // Function to toggle theme\\n const toggleTheme = () => {\\n setTheme((prevTheme) => (prevTheme === \\"light\\" ? \\"dark\\" : \\"light\\"));\\n };\\n\\n return (\\n // Provide the theme and toggleTheme function to the children\\n <ThemeContext.Provider value={{ theme, toggleTheme }}>\\n {children}\\n </ThemeContext.Provider>\\n );\\n};\\n\\nconst ThemedCard = () => {\\n // Access the theme context using the useContext hook\\n const { theme, toggleTheme } = useContext(ThemeContext);\\n\\n return (\\n <div className=\\"flex items-center justify-center min-h-screen bg-gray-100 dark:bg-gray-900\\">\\n <div\\n className={`max-w-md mx-auto shadow-md rounded-lg p-6 transition-colors duration-200 ${\\n theme === \\"light\\"\\n ? \\"bg-white text-gray-800\\"\\n : \\"bg-gray-800 text-white\\"\\n }`}\\n >\\n <h1 className=\\"text-2xl font-bold mb-3\\">Saying Goodbye to UseContext()</h1>\\n <p\\n className={`${theme === \\"light\\" ? \\"text-gray-600\\" : \\"text-gray-300\\"}`}\\n >\\n The use() hook will enable us to say goodbye to the useContext() hook\\n and could potentially replace the useEffect() hook in a few cases.\\n This hook lets us read and asynchronously load a resource such as a\\n promise or a context. This can also be used in fetching data in some\\n cases and also in loops and conditionals, unlike other hooks.\\n </p>\\n {/* Toggle button */}\\n <button\\n onClick={toggleTheme}\\n className={`mt-4 px-4 py-2 rounded-md focus:outline-none focus:ring-2 focus:ring-opacity-50 transition-colors duration-200 ${\\n theme === \\"light\\"\\n ? \\"bg-gray-600 hover:bg-blue-600 text-white focus:ring-blue-500\\"\\n : \\"bg-yellow-400 hover:bg-yellow-500 text-gray-900 focus:ring-yellow-500\\"\\n }`}\\n >\\n {theme === \\"light\\" ? \\"Switch to Dark Mode\\" : \\"Switch to Light Mode\\"}\\n </button>\\n </div>\\n </div>\\n );\\n};\\n\\nconst Theme = () => {\\n return (\\n <ThemeProvider>\\n <ThemedCard />\\n </ThemeProvider>\\n );\\n};\\n\\nexport default Theme;\\n\\n
With use(), you will do the same thing, but this time you will only replace useContext() with the use() hook:
// Replace the useContext import with use
import { createContext, useState, use } from "react";

// Inside ThemedCard, access the theme context directly with the use() hook
const { theme, toggleTheme } = use(ThemeContext);
Dark theme:
\\n\\n
Light theme:
\\n\\n
For the action, we will be creating a post as an example. We will have a post form that will enable us to submit a simple update about either a book we’ve read, how our vacation went, or, my personal favorite, a rant about bugs interfering with my work. Below is what we want to achieve using Actions:
\\nHere is how Action was used to achieve this:
\\n// PostForm component\\nconst PostForm = () => {\\n const formAction = async (formData) => {\\n const newPost = {\\n title: formData.get(\\"title\\"),\\n body: formData.get(\\"body\\"),\\n };\\n console.log(newPost);\\n };\\n\\n return (\\n <form\\n action={formAction}\\n className=\\"bg-white shadow-xl rounded-2xl px-8 pt-6 pb-8 mb-8 transition-all duration-300 hover:shadow-2xl\\"\\n >\\n <h2 className=\\"text-3xl font-bold text-indigo-800 mb-6 text-center\\">\\n Create New Post\\n </h2>\\n <div className=\\"mb-6\\">\\n <label\\n className=\\"block text-gray-700 text-sm font-semibold mb-2\\"\\n htmlFor=\\"title\\"\\n >\\n Title\\n </label>\\n <input\\n className=\\"shadow-inner appearance-none border-2 border-indigo-200 rounded-lg w-full py-3 px-4 text-gray-700 leading-tight focus:outline-none focus:border-indigo-500 transition-all duration-300\\"\\n id=\\"title\\"\\n type=\\"text\\"\\n placeholder=\\"Enter an engaging title\\"\\n name=\\"title\\"\\n />\\n </div>\\n <div className=\\"mb-6\\">\\n <label\\n className=\\"block text-gray-700 text-sm font-semibold mb-2\\"\\n htmlFor=\\"body\\"\\n >\\n Body\\n </label>\\n <textarea\\n className=\\"shadow-inner appearance-none border-2 border-indigo-200 rounded-lg w-full py-3 px-4 text-gray-700 leading-tight focus:outline-none focus:border-indigo-500 transition-all duration-300\\"\\n id=\\"body\\"\\n rows=\\"5\\"\\n placeholder=\\"Share your thoughts...\\"\\n name=\\"body\\"\\n ></textarea>\\n </div>\\n <div className=\\"flex items-center justify-end\\">\\n <button\\n className=\\"bg-gradient-to-r from-indigo-500 to-purple-600 hover:from-indigo-600 hover:to-purple-700 text-white font-bold py-3 px-6 rounded-full focus:outline-none focus:shadow-outline transition-all duration-300 flex items-center\\"\\n type=\\"submit\\"\\n >\\n <PlusIcon className=\\"mr-2 h-5 w-5\\" />\\n Create Post\\n </button>\\n </div>\\n </form>\\n );\\n};\\n\\nexport default PostForm;\\n\\n
If you’ve ever used PHP, using action in the form is very similar, so we are kind of taking things back to our roots. 😀 We have a simple form, and we attach action to it. We can call it whatever we want, and in my case, it is formAction
.
We go ahead and create the formAction function. Since this is an action, we have access to formData. We create an object called newPost and set it to an object with a title and body, which we have access to via the get method. Now, if we console.log newPost, we should see the inputted values, which are the title and the body:
And that’s pretty much it! I didn’t have to create an onClick
and add an event handler
; I just added an action. Below is the rest of the code:
import { useState } from \\"react\\";\\nimport { PlusIcon, SendIcon } from \\"lucide-react\\";\\n\\n// PostItem component\\nconst PostItem = ({ post }) => {\\n return (\\n <div className=\\"bg-gradient-to-r from-purple-100 to-indigo-100 shadow-lg p-6 my-8 rounded-xl transition-all duration-300 hover:shadow-xl hover:scale-105\\">\\n <h2 className=\\"text-2xl font-extrabold text-indigo-800 mb-3\\">\\n {post.title}\\n </h2>\\n <p className=\\"text-gray-700 leading-relaxed\\">{post.body}</p>\\n </div>\\n );\\n};\\n\\n// PostForm component\\nconst PostForm = ({ addPost }) => {\\n const formAction = async (formData) => {\\n const newPost = {\\n title: formData.get(\\"title\\"),\\n body: formData.get(\\"body\\"),\\n };\\n addPost(newPost);\\n };\\n\\n return (\\n <form\\n action={formAction}\\n className=\\"bg-white shadow-xl rounded-2xl px-8 pt-6 pb-8 mb-8 transition-all duration-300 hover:shadow-2xl\\"\\n >\\n <h2 className=\\"text-3xl font-bold text-indigo-800 mb-6 text-center\\">\\n Create New Post\\n </h2>\\n <div className=\\"mb-6\\">\\n <label\\n className=\\"block text-gray-700 text-sm font-semibold mb-2\\"\\n htmlFor=\\"title\\"\\n >\\n Title\\n </label>\\n <input\\n className=\\"shadow-inner appearance-none border-2 border-indigo-200 rounded-lg w-full py-3 px-4 text-gray-700 leading-tight focus:outline-none focus:border-indigo-500 transition-all duration-300\\"\\n id=\\"title\\"\\n type=\\"text\\"\\n placeholder=\\"Enter an engaging title\\"\\n name=\\"title\\"\\n />\\n </div>\\n <div className=\\"mb-6\\">\\n <label\\n className=\\"block text-gray-700 text-sm font-semibold mb-2\\"\\n htmlFor=\\"body\\"\\n >\\n Body\\n </label>\\n <textarea\\n className=\\"shadow-inner appearance-none border-2 border-indigo-200 rounded-lg w-full py-3 px-4 text-gray-700 leading-tight focus:outline-none focus:border-indigo-500 transition-all duration-300\\"\\n id=\\"body\\"\\n rows=\\"5\\"\\n placeholder=\\"Share your thoughts...\\"\\n name=\\"body\\"\\n ></textarea>\\n </div>\\n <div className=\\"flex items-center justify-end\\">\\n <button\\n className=\\"bg-gradient-to-r from-indigo-500 to-purple-600 hover:from-indigo-600 hover:to-purple-700 text-white font-bold py-3 px-6 rounded-full focus:outline-none focus:shadow-outline transition-all duration-300 flex items-center\\"\\n type=\\"submit\\"\\n >\\n <PlusIcon className=\\"mr-2 h-5 w-5\\" />\\n Create Post\\n </button>\\n </div>\\n </form>\\n );\\n};\\n\\n// Posts component\\nconst Posts = () => {\\n const [posts, setPosts] = useState([]);\\n\\n const addPost = (newPost) => {\\n setPosts((posts) => [...posts, newPost]);\\n };\\n\\n return (\\n <div className=\\"container mx-auto px-4 py-8 max-w-4xl\\">\\n <h1 className=\\"text-4xl font-extrabold text-center text-indigo-900 mb-12\\">\\n Logrocket Blog\\n </h1>\\n <PostForm addPost={addPost} />\\n {posts.length > 0 ? (\\n posts.map((post, index) => <PostItem key={index} post={post} />)\\n ) : (\\n <div className=\\"text-center text-gray-500 mt-12\\">\\n <p className=\\"text-xl font-semibold mb-4\\">No posts yet</p>\\n <p>Be the first to create a post!</p>\\n </div>\\n )}\\n </div>\\n );\\n};\\n\\nexport default Posts;\\n\\n
useFormStatus() hook
The above form works, but we can go a step further with the useFormStatus() hook. This way, we can have our submit button disabled, or display whatever we want, while the form is actually submitting.
There are two things to keep in mind. The first is that this hook only returns status information for a parent form, not for a form rendered in the same component. The second is that the hook is imported from react-dom, not react.
In our form above, we will pull the button into a separate component called SubmitButton(), where we get the pending state from useFormStatus(), which will be true or false. Then we write our logic for while it's pending.
Our logic could be something as easy as saying, “If pending, display Creating post, else display Create post” and we can add a little delay so we see our changes. Let’s see how it looks in our code.
\\nSubmit component:
import { useFormStatus } from "react-dom";

// SubmitButton component; must be rendered inside the form it reports on
const SubmitButton = () => {
  const { pending } = useFormStatus();

  return (
    <button
      className="bg-gradient-to-r from-indigo-500 to-purple-600 hover:from-indigo-600 hover:to-purple-700 text-white font-bold py-3 px-6 rounded-full focus:outline-none focus:shadow-outline transition-all duration-300 flex items-center"
      type="submit"
      disabled={pending}
    >
      <PlusIcon className="mr-2 h-5 w-5" />
      {pending ? "Creating Post..." : "Create Post"}
    </button>
  );
};
Simulating a delay in our form submission:
\\n// PostForm component\\nconst PostForm = ({ addPost }) => {\\n const formAction = async (formData) => {\\n // Simulate a delay of 3 seconds\\n await new Promise((resolve) => setTimeout(resolve, 3000));\\n const newPost = {\\n title: formData.get(\\"title\\"),\\n body: formData.get(\\"body\\"),\\n };\\n addPost(newPost);\\n };\\n\\n
We go ahead and render the PostForm
component, and we should have this 👍
\\n
The button is also disabled at the same time so you can’t keep clicking it until the post is created.
useActionState hook
We can refactor our code using the useActionState() hook. What the useActionState hook does is combine the form submission logic, state management, and loading state into one unit.
Doing so automatically handles the pending state during a form submission, allowing us to easily disable the submit button like the useFormStatus hook does, show a loading message, and display either a success or error message.
Unlike useFormStatus, useActionState is imported from react, not react-dom, and this is how it is used:
import { useActionState } from "react";

const [state, formAction, isPending] = useActionState(
  async (prevState, formData) => {
    // Simulate a delay of 3 seconds
    await new Promise((resolve) => setTimeout(resolve, 3000));
    const title = formData.get("title");
    const body = formData.get("body");

    if (!title || !body) {
      return { success: false, message: "Please fill in all fields." };
    }

    const newPost = { title, body };
    addPost(newPost);
    return { success: true, message: "Post created successfully!" };
  },
  null // initial state
);
In the code above, useActionState handles our submission: it extracts the title and body using the formData API, validates the inputs, and returns either a success or an error state. This is how it looks:
The major thing React 19 offers is helping developers build much faster websites, and I am glad to have been able to play my part in introducing it to you. Feel free to ask questions below and also give your two cents on this new version.
async/await in TypeScript
Editor's note: This article was last reviewed and updated by Ikeh Akinyemi in January 2025 to introduce advanced techniques for working with async/await, such as handling multiple async operations concurrently using Promise.all and managing async iterations with for await...of, as well as how to apply async/await within higher-order functions.
Asynchronous programming is a way of writing code that can carry out tasks independently of each other, not needing one task to be completed before another gets started. When you think of asynchronous programming, think of multitasking and effective time management.
\\nIf you’re reading this, you probably have some familiarity with asynchronous programming in JavaScript, and you may be wondering how it works in TypeScript. That’s what we’ll explore in this guide.
Before diving into async/await, it's important to mention that promises form the foundation of asynchronous programming in JavaScript/TypeScript. A promise represents a value that might not be immediately available but will be resolved at some point in the future. A promise can be in one of the following three states:
- Pending: the initial state; the operation has not completed yet
- Fulfilled: the operation completed successfully, and the promise holds the resulting value
- Rejected: the operation failed, and the promise holds the reason for the failure
Here’s how to create and work with promises in TypeScript:
\\n// Type-safe Promise creation\\ninterface ApiResponse {\\n data: string;\\n timestamp: number;\\n}\\n\\nconst fetchData = new Promise<ApiResponse>((resolve, reject) => {\\n try {\\n // Simulating API call\\n setTimeout(() => {\\n resolve({\\n data: \\"Success!\\",\\n timestamp: Date.now()\\n });\\n }, 1000);\\n } catch (error) {\\n reject(error);\\n }\\n});\\n\\n
Promises can be chained using .then()
for successful operations and .catch()
for error handling:
fetchData\\n .then(response => {\\n console.log(response.data); // TypeScript knows response has ApiResponse type\\n return response.timestamp;\\n })\\n .then(timestamp => {\\n console.log(new Date(timestamp).toISOString());\\n })\\n .catch(error => {\\n console.error(\'Error:\', error);\\n });\\n\\n
We’ll revisit the concept of promises later, where we’ll discuss how to possibly execute asynchronous operations in parallel.
async/await in TypeScript
TypeScript is a superset of JavaScript, so async/await works the same, but with some extra goodies and type safety. TypeScript enables you to ensure type safety for the expected result and even check for type errors, which helps you detect bugs earlier in the development process.
async/await
is essentially a syntactic sugar for promises, which is to say that the async/await
keyword is a wrapper over promises. An async
function always returns a promise. Even if you omit the Promise
keyword, the compiler will wrap your function in an immediately resolved promise.
Here’s an example:
//Snippet 1
const myAsyncFunction = async <T>(url: string): Promise<T> => {
  const response = await fetch(url)
  const data: T = await response.json()
  return data
}

//Snippet 2
const immediatelyResolvedPromise = (url: string) => {
  const resultPromise = new Promise((resolve, reject) => {
    resolve(fetch(url))
  })
  return resultPromise
}
Although they look different, the code snippets above are more or less equivalent.
\\nasync/await
simply enables you to write the code more synchronously and unwraps the promise within the same line of code for you. This is powerful when you’re dealing with complex asynchronous patterns.
To get the most out of the async/await
syntax, you’ll need a basic understanding of promises.
As explained earlier, a promise refers to the expectation that something will happen at a particular time, enabling your app to use the result of that future event to perform certain other tasks.
\\nTo demonstrate what I mean, I’ll break down a real-world example and translate it into pseudocode, followed by the actual TypeScript code.
\\nLet’s say I have a lawn to mow. I contact a mowing company that promises to mow my lawn in a couple of hours. In turn, I promise to pay them immediately afterward, provided the lawn is properly mowed.
\\n\\nCan you spot the pattern? The first obvious thing to note is that the second event relies entirely on the previous one. If the first event’s promise is fulfilled, the next event’s will be executed. The promise in that event is then either fulfilled, rejected, or remains pending.
\\nLet’s look at this sequence step by step and then explore its code:
\\nBefore we write out the full code, it makes sense to examine the syntax for a promise — specifically, an example of a promise that resolves into a string.
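A minimal version of such a promise looks like this:

const promise = new Promise<string>((resolve, reject) => {
  resolve('This promise resolves into a string')
})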
\\nWe declared a promise
with the new + Promise
keyword, which takes in the resolve
and reject
arguments. Now let’s write a promise for the flow chart above:
// I send a request to the company. This is synchronous
// company replies with a promise
const angelMowersPromise = new Promise<string>((resolve, reject) => {
  // the promise settles after certain hours
  setTimeout(() => {
    const lawnMowed = true // flip to false to simulate a failure
    if (lawnMowed) {
      resolve('We finished mowing the lawn')
    } else {
      reject("We couldn't mow the lawn")
    }
  }, 100000) // settles after 100,000ms
})

const myPaymentPromise = new Promise<Record<string, number | string>>((resolve, reject) => {
  setTimeout(() => {
    const lawnProperlyMowed = true
    if (lawnProperlyMowed) {
      // a resolved promise with an object of 1000 Euro payment
      // and a thank you message
      resolve({
        amount: 1000,
        note: 'Thank You',
      })
    } else {
      // reject with 0 Euro and an unsatisfactory note
      reject({
        amount: 0,
        note: 'Sorry Lawn was not properly Mowed',
      })
    }
  }, 100000)
})
In the code above, we declared both the company’s promises and our promises. The company promise is either resolved after 100,000ms or rejected. A Promise
is always in one of three states: resolved
if there is no error, rejected
if an error is encountered, or pending
if the Promise
has been neither rejected nor fulfilled. In our case, it falls within the 100000ms
period.
But how can we execute the task sequentially and synchronously? That’s where the then
keyword comes in. Without it, the functions simply run in the order they resolve.
then
Chaining promises allows them to run in sequence using the then
keyword. This functions like a normal human language — do this and then that and then that, and so on.
The code below will run the angelMowersPromise
. If there is no error, it’ll run the myPaymentPromise
. If there is an error in either of the two promises, it’ll be caught in the catch
block:
angelMowersPromise\\n .then(() => myPaymentPromise.then(res => console.log(res)))\\n .catch(error => console.log(error))\\n\\n
Now let’s look at a more technical example. A common task in frontend programming is to make network requests and respond to the results accordingly.
\\nBelow is a request to fetch a list of employees from a remote server:
const api = 'http://dummy.restapiexample.com/api/v1/employees'
fetch(api)
  .then(response => response.json())
  .then(employees => employees.forEach(employee => console.log(employee.id))) // logs all employee ids
  .catch(error => console.log(error.message)) // logs any error from the promise
There may be times when you need numerous promises to execute in parallel or sequence. Constructs such as Promise.all
or Promise.race
are especially helpful in these scenarios.
For example, imagine that you need to fetch a list of 1,000 GitHub users, and then make an additional request with the ID to fetch avatars for each of them. You don’t necessarily want to wait for each user in the sequence; you just need all the fetched avatars. We’ll examine this in more detail later when we discuss Promise.all
.
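We won't cover Promise.race in depth here, but as a hedged sketch, it is often used to put a timeout on a request like these (withTimeout is a helper name made up for illustration):

// Promise.race settles as soon as the first promise settles,
// which makes it handy for timeouts
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
    ),
  ]);

withTimeout(fetch('https://api.github.com/users'), 5000)
  .then(res => console.log(res.status))
  .catch(err => console.error(err.message));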
Now that you have a fundamental grasp of promises, let’s look at the async/await
syntax.
async/await
The async/await
syntax simplifies working with promises in JavaScript. It provides an easy interface to read and write promises in a way that makes them appear synchronous.
An async/await
will always return a Promise
. Even if you omit the Promise
keyword, the compiler will wrap the function in an immediately resolved Promise
. This enables you to treat the return value of an async
function as a Promise
, which is useful when you need to resolve numerous asynchronous functions.
As the name implies, async
always goes hand in hand with await
. That is, you can only await
inside an async
function. The async
function informs the compiler that this is an asynchronous function.
If we convert the promises from above, the syntax looks like this:
\\nconst myAsync = async (): Promise<Record<string, number | string>> => {\\n await angelMowersPromise\\n const response = await myPaymentPromise\\n return response\\n}\\n\\n
As you can immediately see, this looks more readable and appears synchronous. We told the compiler to await the execution of angelMowersPromise
before doing anything else. Then, we return the response from the myPaymentPromise
.
You may have noticed that we omitted error handling. We could do this with the catch
block after the then
in a promise. But what happens if we encounter an error? That leads us to try/catch
.
try/catch
We’ll refer to the employee fetching example to see the error handling in action, as it is likely to encounter an error over a network request.
\\nLet’s say, for example, that the server is down, or perhaps we sent a malformed request. We need to pause execution to prevent our program from crashing. The syntax will look like this:
interface Employee {
  id: number
  employee_name: string
  employee_salary: number
  employee_age: number
  profile_image: string
}

const fetchEmployees = async (): Promise<Array<Employee> | string> => {
  const api = 'http://dummy.restapiexample.com/api/v1/employees'
  try {
    const response = await fetch(api)
    const { data } = await response.json()
    return data
  } catch (error) {
    // Narrow the unknown error before reading its message
    return error instanceof Error ? error.message : 'An unknown error occurred'
  }
}
We initiated the function as an async
function. We expect the return value to be either an array of employees or a string of error messages. Therefore, the type of promise is Promise<Array<Employee> | string>
.
Inside the try
block are the expressions we expect the function to run if there are no errors. Meanwhile, the catch
block captures any errors that arise. In that case, we’d just return the message
property of the error
object.
The beauty of this is that any error that first occurs within the try
block is thrown and caught in the catch
block. An uncaught exception can lead to hard-to-debug code or even break the entire program.
While traditional try/catch
blocks are effective for catching errors at the local level, they can become repetitive and clutter the main business logic when used too frequently. This is where higher-order functions come into play.
A higher-order function is a function that takes one or more functions as arguments or returns a function. In the context of error handling, a higher-order function can wrap an asynchronous function and handle any errors it might throw, thereby abstracting the try/catch
logic away from the core business logic.
The main idea behind using higher-order functions for error handling in async/await
is to create a wrapper function that takes an async function as an argument along with any parameters that the async function might need. Inside this wrapper, we implement a try/catch
block. This approach allows us to handle errors in a centralized manner, making the code cleaner and more maintainable.
Let’s refer to the employee fetching example:
// Higher-order function that centralizes try/catch for any async function
async function handleAsyncErrors<T, Args extends unknown[]>(
  asyncFn: (...args: Args) => Promise<T>,
  ...args: Args
): Promise<T | null> {
  try {
    return await asyncFn(...args);
  } catch (error) {
    console.error('An error occurred:', error);
    return null; // null signals an error state to the caller
  }
}

// Async function to fetch employee data
async function fetchEmployees(apiUrl: string): Promise<Employee[]> {
  const response = await fetch(apiUrl);
  const data = await response.json();
  return data;
}

// Wrapped version of fetchEmployees using the higher-order function
const safeFetchEmployees = (url: string) => handleAsyncErrors(fetchEmployees, url);

// Example API URL
const api = 'http://dummy.restapiexample.com/api/v1/employees';

// Using the wrapped function to fetch employees
safeFetchEmployees(api)
  .then(data => {
    if (data) {
      console.log("Fetched employee data:", data);
    } else {
      console.log("Failed to fetch employee data.");
    }
  })
  .catch(err => {
    // This catch block might be redundant, depending on your error handling strategy within the higher-order function
    console.error("Error in safeFetchEmployees:", err);
  });
In this example, the safeFetchEmployees
function uses the handleAsyncErrors
higher-order function to wrap the original fetchEmployees
function.
This setup automatically handles any errors that might occur during the API call, logging them and returning null
to indicate an error state. The consumer of safeFetchEmployees
can then check if the returned value is null
to determine if the operation was successful or if an error occurred.
Promise.all
As mentioned earlier, there are times when we need promises to execute in parallel.
Let's look at an example from our employee API. Say we first need to fetch all employees, then fetch their names, and then generate an email from the names. We'll need to run some of these steps in sequence and others in parallel, so that one doesn't block another.
\\nIn this case, we would make use of Promise.all
. According to Mozilla, “Promise.all
is typically used after having started multiple asynchronous tasks to run concurrently and having created promises for their results so that one can wait for all the tasks being finished.”
In pseudocode, we’d have something like this:
1. Fetch all the employees => /employee
2. Wait for all the employee data, then extract the id from each user and fetch each user => /employee/{id}
3. Generate an email for each user from their name
const baseApi = 'https://reqres.in/api/users?page=1'
const userApi = 'https://reqres.in/api/user'

const fetchAllEmployees = async (url: string): Promise<Employee[]> => {
  const response = await fetch(url)
  const { data } = await response.json()
  return data
}

const fetchEmployee = async (url: string, id: number): Promise<Record<string, string>> => {
  const response = await fetch(`${url}/${id}`)
  const { data } = await response.json()
  return data
}

const generateEmail = (name: string): string => {
  return `${name.split(' ').join('.')}@company.com`
}

const runAsyncFunctions = async () => {
  try {
    const employees = await fetchAllEmployees(baseApi)
    // Run the per-employee fetches concurrently and wait for every email
    const emails = await Promise.all(
      employees.map(async user => {
        const userName = await fetchEmployee(userApi, user.id)
        const email = generateEmail(userName.name)
        return email
      })
    )
    console.log(emails)
  } catch (error) {
    console.log(error)
  }
}
runAsyncFunctions()
In the above code, fetchAllEmployees fetches all the employees from the baseApi. We await the response, convert it to JSON, and then return the converted data.
The most important concept to keep in mind is how we sequentially executed the code line by line inside the async
function with the await
keyword. We’d get an error if we tried to convert data to JSON that has not been fully awaited. The same concept applies to fetchEmployee
, except that we’d only fetch a single employee. The more interesting part is the runAsyncFunctions
, where we run all the async functions concurrently.
First, wrap all the methods within runAsyncFunctions
inside a try/catch
block. Next, await
the result of fetching all the employees. We need the id
of each employee to fetch their respective data, but what we ultimately need is information about the employees.
This is where we can call upon Promise.all
to handle all the Promises
concurrently. Each fetchEmployee
Promise
is executed concurrently for all the employees. The awaited data from the employees’ information is then used to generate an email for each employee with the generateEmail
function.
In the case of an error, it propagates as usual, from the failed promise to Promise.all
, and then becomes an exception we can catch inside the catch
block.
Promise.allSettled
Promise.all
is great when we need all promises to succeed, but real-world applications often need to handle situations where some operations might fail while others succeed. Let’s consider our employee management system: What if we need to update multiple employee records, but some updates might fail due to validation errors or network issues?
This is where Promise.allSettled
comes in handy. Unlike Promise.all
, which fails completely if any promise fails, Promise.allSettled
will wait for all promises to complete, regardless of whether they succeed or fail. It gives us information about both successful and failed operations.
Let’s enhance our employee management system to handle bulk updates:
\\ninterface UpdateResult {\\n id: number;\\n success: boolean;\\n message: string;\\n}\\n\\nconst updateEmployee = async (employee: Employee): Promise<UpdateResult> => {\\n const api = `${userApi}/${employee.id}`;\\n try {\\n const response = await fetch(api, {\\n method: \'PUT\',\\n body: JSON.stringify(employee),\\n headers: {\\n \'Content-Type\': \'application/json\'\\n }\\n });\\n const data = await response.json();\\n return {\\n id: employee.id,\\n success: true,\\n message: \'Update successful\'\\n };\\n } catch (error) {\\n return {\\n id: employee.id,\\n success: false,\\n message: error instanceof Error ? error.message : \'Update failed\'\\n };\\n }\\n};\\n\\nconst bulkUpdateEmployees = async (employees: Employee[]) => {\\n const updatePromises = employees.map(emp => updateEmployee(emp));\\n\\n const results = await Promise.allSettled(updatePromises);\\n\\n // Process results and generate a report\\n const summary = results.reduce((acc, result, index) => {\\n if (result.status === \'fulfilled\') {\\n acc.successful.push(result.value);\\n } else {\\n acc.failed.push({\\n id: employees[index].id,\\n error: result.reason\\n });\\n }\\n return acc;\\n }, {\\n successful: [] as UpdateResult[],\\n failed: [] as Array<{id: number; error: any}>\\n });\\n\\n return summary;\\n};\\n\\n
Think of Promise.allSettled
like a project manager tracking multiple tasks. Instead of stopping everything when one task fails (like Promise.all
would), the manager continues monitoring all tasks and provides a complete report of what succeeded and what failed. This is particularly useful when you need to:
for await...of
Sometimes we need to process large amounts of data that come in chunks or pages. Imagine you’re exporting employee data from a large enterprise system – there might be thousands of records that come in batches to prevent memory overload.
\\nThe for await...of
loop is perfect for this scenario. It allows us to process asynchronous data streams one item at a time, making our code both efficient and readable. Here’s how we can use it with our employee system:
interface PaginatedResponse<T> {\\n data: T[];\\n nextPage?: string;\\n}\\n\\nasync function* fetchAllPages<T>(\\n initialUrl: string,\\n fetchPage: (url: string) => Promise<PaginatedResponse<T>>\\n): AsyncIterableIterator<T> {\\n let currentUrl = initialUrl;\\n while (currentUrl) {\\n const response = await fetchPage(currentUrl);\\n\\n for (const item of response.data) {\\n yield item;\\n }\\n currentUrl = response.nextPage || \'\';\\n }\\n}\\n\\n// Usage with type safety\\nasync function processAllEmployee() {\\n const fetchPage = async (url: string): Promise<PaginatedResponse<Employee>> => {\\n const response = await fetch(url);\\n return response.json();\\n };\\n try {\\n for await (const employee of fetchAllPages(\'/api/employees\', fetchPage)) {\\n // Process each employee as they come in\\n console.log(`Processing employee: ${employee.employee_name}`);\\n await updateEmployeeAnalytics(employee);\\n }\\n } catch (error) {\\n console.error(\'Failed to process employees:\', error);\\n }\\n}\\nfunction updateEmployeeAnalytics(employee: Employee) { /** custom logic */}\\n\\n
Think of for await...of
like a conveyor belt in a factory. Instead of waiting for all products (data) to be manufactured before starting to pack them (process them), we can pack each product as it comes off the belt. This approach has several benefits:
Combining higher-order functions with async/await
creates powerful patterns for handling asynchronous operations.
When working with our employee management system, we often need to process arrays of data asynchronously. Let’s see how we can effectively use array methods with async/await
:
// Async filter: Keep only active employees\\nasync function filterActiveEmployees(employees: Employee[]) {\\n const checkResults = await Promise.all(\\n employees.map(async (employee) => {\\n const status = await checkEmployeeStatus(employee.id);\\n return { employee, isActive: status === \'active\' };\\n })\\n );\\n\\n return checkResults\\n .filter(result => result.isActive)\\n .map(result => result.employee);\\n}\\n\\n// Async reduce: Calculate total department salary\\nasync function calculateDepartmentSalary(employeeIds: number[]) {\\n return await employeeIds.reduce(async (promisedTotal, id) => {\\n const total = await promisedTotal;\\n const employee = await fetchEmployeeDetails(id);\\n return total + employee.salary;\\n }, Promise.resolve(0)); // Initial value must be a Promise\\n}\\n\\n
When working with these array methods, there are some important considerations:
\\nmap
with async operations returns an array of promises that need to be handled with Promise.all
filter
requires special handling because we can’t directly use the promise result as a filter conditionreduce
with async operations needs careful promise handling for the accumulatorThere are use cases where utility functions are needed to carry out some operations on responses returned from asynchronous calls. We can create reusable higher-order functions that wrap async operations with these additional functionalities:
\\n// Higher-order function for caching async results\\nfunction withCache<T>(\\n asyncFn: (id: number) => Promise<T>,\\n ttlMs: number = 5000\\n) {\\n const cache = new Map<number, { data: T; timestamp: number }>();\\n\\n return async (id: number): Promise<T> => {\\n const cached = cache.get(id);\\n const now = Date.now();\\n\\n if (cached && now - cached.timestamp < ttlMs) {\\n return cached.data;\\n }\\n\\n const data = await asyncFn(id);\\n cache.set(id, { data, timestamp: now });\\n return data;\\n };\\n}\\n\\n// Usage example\\nconst cachedFetchEmployee = withCache(async (id: number) => {\\n const response = await fetch(`${baseApi}/employee/${id}`);\\n return response.json();\\n});\\n\\n
In this above snippet, the withCache
higher-order function adds caching capability to any async function that fetches data by ID. If the same ID is requested multiple times within five seconds (the default TTL), the function returns the cached result instead of making another API call. This significantly reduces unnecessary network requests when the same employee data is needed multiple times in quick succession.
Awaited type
Awaited is a utility type that models operations like await in async functions. It unwraps the resolved value of a promise, discarding the promise itself, and works recursively, thereby removing any nested promise layers.
Awaited
is the type of value that you expect to get after awaiting a promise. It helps your code understand that once you use await
, you’re not dealing with a promise anymore, but with the actual data you wanted.
Here’s the basic syntax:
\\ntype MyPromise = Promise<string>;\\ntype AwaitedType = Awaited<MyPromise>; // AwaitedType will be \'string\'\\n\\n
The Awaited type does not exactly model the then method in promises; however, Awaited can be relevant when using then in async functions. If you use await inside a then callback, Awaited helps infer the type of the awaited value, avoiding the need for additional type annotations.
Awaited
can help clarify the type of data
and awaitedValue
in async
functions, even when using then
for promise chaining. However, it doesn’t replace the functionality of then
itself.
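Here's a small sketch showing both the recursive unwrapping and how Awaited pairs with ReturnType to recover what an async function resolves to (loadAll is a made-up helper):

// Awaited unwraps nested promises recursively
type Nested = Promise<Promise<number>>;
type Value = Awaited<Nested>; // number

// Inferring what an async function resolves to
async function loadAll(urls: string[]) {
  const responses = await Promise.all(urls.map(u => fetch(u)));
  return Promise.all(responses.map(r => r.json()));
}

// Awaited strips the Promise from the inferred return type
type LoadAllResult = Awaited<ReturnType<typeof loadAll>>; // any[]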
async
and await
enable us to write asynchronous code in a way that looks and behaves like synchronous code. This makes the code much easier to read, write, and understand.
Here are some key concepts to keep in mind as you’re working on your next asynchronous project in TypeScript:
- await only works inside an async function
- A function marked with the async keyword always returns a Promise
- If the value returned from an async function is not a Promise, it will be wrapped in an immediately resolved Promise
- Execution is paused when an await keyword is encountered, until the Promise is completed
- await will either return a result from a fulfilled Promise or throw an exception from a rejected Promise
\\nDeep linking and universal links are the gateways to your application. Deep linking is already part of a seamless experience that any mobile application should have.
\\nUltimately, they help to reduce churn and increase the loyalty of your user base. Implementing them correctly will directly impact your ability to master campaigns and run promotions within your application.
\\nThe deep linking question is as important today as ever before, specifically taking into consideration Identifier for Advertisers (IDFA) and a rising number of walled gardens. Well-executed deep linking will enable your retargeting campaigns and bring engagement to a new level, allowing end users to have a seamless one-click experience between the web and your application.
\\nOnce users have already discovered and installed your application, deep linking is the perfect tool to retain those newly acquired users.
\\nIn this article, I outline existing ways to implement deep linking and how to test it using the React Native Typescript codebase. You can find the full source code for this project available on GitHub.
\\nIn a nutshell, deep linking is a way to redirect users from a webpage into your application in order to show a specific screen with the requested content. It can be a product, an article, secure content behind a paywall, or a login screen.
\\nOne of the most famous examples is the Slack link that they send to your email. This link opens right inside the application, authorizing you to use it with your email account — no account setup needed.
Deep linking remains paramount today. Every effort to lead your users into the app and improve their engagement heavily depends on the strategy built on top of deep linking.
To summarize the main points why deep linking is important:
- It gives users a seamless one-click journey from the web (email, social media, search, ads) straight to the relevant screen in your app
- It powers marketing campaigns, promotions, and retargeting
- It reduces churn and increases the loyalty of your user base
\\nImplementing deep linking requires a more intricate understanding of iOS and Android for extra configuration of each platform in your React Native project.
\\nTake, for example, this syntax diagram of the following URL:
\\nbilling-app://billing/1\\n
\\nWhenever you navigate to a website using, for example, https://reactivelions.com, you use a URL in which the URL scheme is “https”. In the example above, billing-app
is a URL scheme for your deep linking URL.
Starting with iOS 9, Apple introduced universal links to reduce confusion and simplify the user experience. Apple also brought universal links to the Mac starting with macOS 10.15.
\\nThe idea behind universal links is to connect specific website URLs that match content on your website with content inside your application. Another thing to note: the Apple dev team recommends migrating from custom URL schemes to universal links. This is because custom schemes are less secure and vulnerable to exploitation.
\\nUniversal links establish a secure connection between your app and website. In Xcode, you enable your app’s entitlement to handle specific domains, while your web server hosts a JSON file detailing the app’s accessible content. This mutual verification prevents others from misusing your app’s links.
\\nThis URL would act the same way as the deep linking URL I have shown in the previous section:
\\nhttps://app.reactivelions.com/billing/3\\n\\n
Configuring universal links requires extra steps on both the server side and mobile side.
\\nFirst, you start on the server side, where you need to upload a JSON-formatted file that defines the website’s association with a mobile application and its specific routes.
Let's say you run the example.com domain and want to create an association file. Start by creating a .well-known folder (or route) at your domain root, then serve the JSON association file at apple-app-site-association:
https://example.com/.well-known/apple-app-site-association\\n\\n
Add JSON content to define website associations:
\\n{\\n \\"applinks\\": {\\n \\"apps\\": [],\\n \\"details\\": [\\n {\\n \\"appID\\": \\"ABCD1234.com.your.app\\",\\n \\"paths\\": [ \\"/billing/\\", \\"/billing/*\\"]\\n },\\n {\\n \\"appID\\": \\"ABCD1234.com.your.app\\",\\n \\"paths\\": [ \\"*\\" ]\\n }\\n ]\\n }\\n}\\n\\n
Check your Apple developer portal to confirm your appID.
\\nYour web server must have a valid HTTPS certificate, as HTTP is insecure and cannot confirm the link between your app and website. The HTTPS certificate’s root must be recognized by the operating system, as custom root certificates aren’t supported.
\\nIf your app is targeting iOS 13 or macOS 10.15 and later, the “apps” key is no longer necessary and can be removed. However, if you’re supporting iOS 12, tvOS 12, or earlier versions, you’ll still need to keep the “apps” key included.
\\nIf you have multiple apps with the universal links configuration, and you do not want to repeat the relevant JSON you can use this:
\\n{\\n \\"applinks\\": {\\n \\"apps\\": [],\\n \\"details\\": [\\n {\\n \\"appIDs\\": [\\"ABCD1234.com.your.app\\", \\"ABCD1234.com.your.app2\\"],\\n \\"paths\\": [ \\"/billing/\\", \\"/billing/*\\"]\\n },\\n ]\\n }\\n}\\n\\n
Use this if you are targeting iOS 13, or macOS 10.15 and later. But if you need to support earlier releases, you should stick to using the singular appID key for each app.
\\nThe paths key uses terminal-style pattern matching for URLs, where * represents multiple characters and ? matches one. Early this year, the paths key was replaced with components, which uses an array of dictionaries for URL component pattern matching. Components include the path (marked by /), fragment (marked by #), and query (marked by ?).
\\n{\\n \\"applinks\\": {\\n \\"apps\\": [],\\n \\"details\\": [\\n {\\n \\"appIDs\\": [\\"ABCD1234.com.your.app\\", \\"ABCD1234.com.your.app2\\"],\\n \\"components\\": [ \\n \\"/\\": \\"/path/*/filename\\",\\n \\"#\\": \\"*fragment\\",\\n \\"?\\": \\"widget=?*\\"\\n ]\\n },\\n ]\\n }\\n}\\n\\n
Older versions like iOS 12, tvOS 12, and earlier macOS versions still use the paths key, but newer versions will ignore it if components are present.
\\nIf parts of your website aren’t intended to be represented in the app, you can exclude these sections using the exclude key with true as its value.
\\n\\n\\"components\\": [ \\n \\"/\\": \\"/path/*/filename\\",\\n \\"#\\": \\"*fragment\\",\\n \\"?\\": \\"widget=?*\\",\\n \\"exclude\\": true\\n]\\n\\n
This functions similarly to the not keyword in the old paths key, though not isn’t compatible with the new components dictionary.
\\nURL pattern matching demo
\\nHere’s how you can handle URL pattern matching for a meal ordering app using JSON to define component patterns in Universal Links:
{\\n \\"components\\": [\\n {\\n \\"/\\": \\"/*/order\\"\\n }\\n ]\\n}\\n\\n
This matches any path that has an arbitrary first component followed by /order.
\\nExample URLs:
\\nhttps://example.com/user/order
\\nhttps://example.com/product/order
{\\n \\"components\\": [\\n {\\n \\"/\\": \\"/taco\\",\\n \\"?\\": { \\"cheese\\": \\"*\\" }\\n }\\n ]\\n}\\n\\n
The * in the query will match any value for cheese.
\\nExample URLs:
\\nhttps://example.com/taco?cheese=cheddar
\\nhttps://example.com/taco?cheese=mozzarella
{\\n \\"components\\": [\\n {\\n \\"/\\": \\"/coupon\\",\\n \\"exclude\\": true,\\n \\"pattern\\": \\"/coupon/1*\\"\\n },\\n {\\n \\"/\\": \\"/coupon\\",\\n \\"pattern\\": \\"/coupon/*\\"\\n }\\n ]\\n}\\n\\n
Here, the first entry excludes codes starting with 1, while the second matches all other coupon codes.
\\nExample URLs:
\\nExcluded: https://example.com/coupon/1234
\\nMatched: https://example.com/coupon/5678
For the production-ready backend, you can test if your website is properly configured for Universal Links using the aasa-validator tool.
\\nTo demonstrate how deep linking works, we’ll build a simple test application. This application will have straightforward navigation between the Home
and Billing
screens using the @react-navigation
component:
npx react-native init BillingApp --template react-native-template-typescript
Open your Xcode workspace:
\\nopen BillingApp/ios/BillingApp.xcworkspace\\n
\\nIn your Xcode window, select your newly created project in the left pane (in our case it’s BillingApp). Next, select the BillingApp target inside the newly opened left pane of the internal view for the BillingApp.xcodeproj
.
Navigate to the Info section in the top center of that view, then go to the very bottom and click the plus (+) sign under URL Types. Make sure to add billing-id as your new Identifier and specify URL Schemes as billing-app.
\\nBy following these steps above, you’ve enabled iOS project configuration to use deep links like billing-app://billing/4
inside your Objective C and JavaScript code later on.
After configuring Xcode, the next step will be focused on React Native. I will start with linking part of the React Native core called LinkingIOS
. You can read more about it in the official documentation here.
Its main goal is to construct a bridge that will enable a JavaScript thread to receive updates from the native part of your application, which you can read more about in the AppDelegate.m
part below.
Go to ios/Podfile and add this line under target:
\\npod \'React-RCTLinking\', :path => \'../node_modules/react-native/Libraries/LinkingIOS\'\\n\\n
And then make sure to update your pods using this command:
\\ncd ios && pod install\\n\\n
The next step is to enable the main entry points of your application to have control over the callbacks that are being called when the application gets opened via deep linking.
\\nIn this case, we implement the function openURL
with options and pass its context to RCTLinkingManager
via its native module called RCTLinkingManager
.
#import <React/RCTLinkingManager.h>\\n\\n- (BOOL)application:(UIApplication *)application\\nopenURL:(NSURL *)url\\noptions:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options\\n{\\nreturn [RCTLinkingManager application:application openURL:url options:options];\\n}\\n\\n
If you’re targeting iOS 8.x or older, you can use the following code instead:
\\n#import <React/RCTLinkingManager.h>\\n\\n- (BOOL)application:(UIApplication *)application openURL:(NSURL *)url\\n sourceApplication:(NSString *)sourceApplication annotation:(id)annotation\\n{\\n return [RCTLinkingManager application:application openURL:url\\n sourceApplication:sourceApplication annotation:annotation];\\n}\\n\\n
For the universal links, we will need to implement a callback function continueUserActivity
, which will also pass in the context of the app and current universal link into the JavaScript context via RCTLinkingManager
.
- (BOOL)application:(UIApplication *)application continueUserActivity:(nonnull NSUserActivity *)userActivity\\nrestorationHandler:(nonnull void (^)(NSArray<id<UIUserActivityRestoring>> * _Nullable))restorationHandler\\n{\\nreturn [RCTLinkingManager application:application\\ncontinueUserActivity:userActivity\\nrestorationHandler:restorationHandler];\\n}\\n\\n
Android deep linking works slightly differently in comparison to iOS. This configuration operates on top of Android Intents, an abstraction of an operation to be performed. Most of the configuration is stored under AndroidManifest.xml and works by actually pointing to which Intent will be opened when the deep link is executed.
\\nInside your Android manifest android/app/src/main/AndroidManifest.xml
we need to do the following:
Intent
filterView
action and specify two main categories: DEFAULT
and BROWSABLE
billing-app
and defining the main route as billing
This way Android will know that this app has deep linking configured for this route billing-app://billing/*
:
<intent-filter android:label=\\"filter_react_native\\">\\n <action android:name=\\"android.intent.action.VIEW\\" />\\n <category android:name=\\"android.intent.category.DEFAULT\\" />\\n <category android:name=\\"android.intent.category.BROWSABLE\\" />\\n <data android:scheme=\\"billing-app\\" android:host=\\"billing\\" />\\n</intent-filter>\\n
In most production-grade applications you’ll end up having multiple screens. You’re most likely to end up using some form of component that implements this navigation for you. However, you can opt out and use deep linking without navigation context by invoking React Native’s core library via JavaScript by calling Linking
directly.
You can do this inside your React Native code using these two methods:
\\nLinking.addEventListener(\'url\', ({url}) => {})\\n
Linking.getInitialURL()\\n
Use the acquired deep linking URL to show different content, based on the logic of your application.
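Here’s a minimal sketch of how the two methods can be combined for our billing-app://billing/4 scheme. The parseBillingId helper and useBillingDeepLink hook are hypothetical names, and the subscription-style removal assumes React Native 0.65 or newer:

import { useEffect } from 'react';
import { Linking } from 'react-native';

// Hypothetical helper: pulls the trailing id out of billing-app://billing/4
const parseBillingId = (url: string | null): number | null => {
  const last = url ? url.split('/').pop() : null;
  return last ? Number(last) : null;
};

// Hypothetical hook combining both methods
function useBillingDeepLink(onBillingId: (id: number | null) => void) {
  useEffect(() => {
    // Cold start: the app was launched by a deep link
    Linking.getInitialURL().then((url) => onBillingId(parseBillingId(url)));

    // Warm start: the app was already running when the link was opened
    const subscription = Linking.addEventListener('url', ({ url }) =>
      onBillingId(parseBillingId(url))
    );
    return () => subscription.remove();
  }, [onBillingId]);
}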
\\nIf you’re using @react-navigation
you can opt-in to configure deep linking using its routing logic.
For this, you need to define your prefixes
for both universal linking and deep linking. You will also need to define config
with its screens
, including nested screens if your application has many screens and is very complex.
Here’s an example of how this configuration looks for our application:
\nimport { NavigationContainer } from \'@react-navigation/native\';\nimport { Text } from \'react-native\';\nexport const config = {\n  screens: {\n    Home: {\n      path: \'home/:id?\',\n      parse: {\n        id: (id: string) => `${id}`,\n      },\n    },\n    Billing: {\n      path: \'billing/:id?\',\n      parse: {\n        id: (id: string) => `${id}`,\n      },\n    },\n  },\n};\nconst linking = {\n  prefixes: [\'https://app.reactivelions.com\', \'billing-app://home\'],\n  config,\n};\nfunction App() {\n  return (\n    <NavigationContainer linking={linking} fallback={<Text>Loading...</Text>}>\n      {/* content */}\n    </NavigationContainer>\n  );\n}\n
In the code section above, we introduced universal linking and walked through the steps needed to define universal link association on your website’s server end. In Android, there’s something similar called Verified Android App Links.
\\nYou can also check out the react-navigation documentation for more details on configuring links.
\\nUsing Android App Links helps you avoid the confusion of opening deep links with other applications that aren’t yours. Android usually suggests using a browser to open unverified deep links whenever it’s unsure if they’re App Links (and not deep links).
\\nTo enable App Links verification, you will need to change the intent declaration in your manifest file like so:
\\n<intent-filter android:autoVerify=\\"true\\">\\n\\n
To create app-verified links you will need to generate a JSON verification file that will be placed in the same .well-known
folder as in the Xcode section:
keytool -list -v -keystore my-release-key.keystore\\n
This command prints your keystore’s SHA-256 certificate fingerprint. Add it to the following association file, which declares the link between your domain and your app:
\\n[{\\n \\"relation\\": [\\"delegate_permission/common.handle_all_urls\\"],\\n \\"target\\": {\\n \\"namespace\\": \\"android_app\\",\\n \\"package_name\\": \\"com.mycompany.app1\\",\\n \\"sha256_cert_fingerprints\\":\\n [\\"14:6D:E9:83:C5:73:06:50:D8:EE:B9:95:2F:34:FC:64:16:A0:83:42:E6:1D:BE:A8:8A:04:96:B2:3F:CF:44:E5\\"]\\n }\\n}]\\n
Then place the generated file on your website using this path:
\\nhttps://www.example.com/.well-known/assetlinks.json
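Once the file is live, you can sanity-check the association with Google’s Digital Asset Links API; the query below keeps the example.com placeholder from above, so substitute your own domain:

curl "https://digitalassetlinks.googleapis.com/v1/statements:list?source.web.site=https://www.example.com&relation=delegate_permission/common.handle_all_urls"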
After going through all configurations and implementations, you want to ensure you’ve set everything up correctly and that deep links work on each platform of your choice.
\\nBefore you test universal links or Android App Verified Links, make sure that all JSON files are uploaded, available, and up to date for each of your domains. Depending on your web infrastructure, you might even want to refresh your Content Delivery Network (CDN) cache.
\\nA successful deep linking test means that, after opening a deep link in the browser, you are forwarded to your application and you can see the desired screen with the given content.
\nOur application has Home
and Billing
screens. When you go to the Billing screen, you can specify a number, and the application will render the same number of emojis with flying dollar banknotes.
If you try to go to the Billing
screen from your Home
screen, it won’t pass any content, and therefore it will not render any emojis.
In your terminal, you can use these commands to test deep linking for each platform. Play around by changing the number at the end of your deep linking URL to see different numbers of emojis.
\\nnpx uri-scheme open billing-app://billing/5 --ios\\n
You can also open Safari and enter billing-app://billing/5
in your address bar, then click go.
npx uri-scheme open billing-app://billing/5 --android\\n
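If you have adb available, you can also fire the Android intent directly; the package name below is borrowed from the earlier assetlinks example, so substitute your own:

adb shell am start -W -a android.intent.action.VIEW -d "billing-app://billing/5" com.mycompany.app1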
You might have noticed that I used TypeScript to write the code for this project. For this project, I’ve implemented custom property types that require custom declarations for each screen. Check props.ts to see these type declarations.
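As a rough sketch, such declarations might look like the following, assuming a native stack navigator; the type names here are assumptions, and the real declarations live in props.ts:

import type { NativeStackScreenProps } from '@react-navigation/native-stack';

// Route params mirror the linking config above: both screens accept an optional id
export type RootStackParamList = {
  Home: { id?: string };
  Billing: { id?: string };
};

// Screen prop types derived from the param list
export type HomeScreenProps = NativeStackScreenProps<RootStackParamList, 'Home'>;
export type BillingScreenProps = NativeStackScreenProps<RootStackParamList, 'Billing'>;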
\\nAs I mentioned earlier, if you’re building a production-grade application, you’re most likely to end up building complex routing and will need nesting routes to be implemented with your navigator library.
\\nNesting navigation will enable you to decompose each screen into smaller components and have sub-routes based on your business logic. Learn more about building nesting routes using @react-navigation
here.
Looking forward to seeing what you build with this!
\\n\\n\\n\\nWould you be interested in joining LogRocket\'s developer community?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\n<dialog>
element\\n HTMLDialogElement
API\\n <dialog>
element\\n :modal
pseudo-class\\n ::backdrop
pseudo-element\\n <dialog>
element\\n Modal
\\n Modal
\\n Modal
component\\n NewsletterModal
component\\n NewsletterModal
component\\n Modal
component\\n NewsletterModal
\\n Editor’s note: This article was reviewed and updated by Rahul Chhodde in January 2025. The updates include revisions to code blocks, ensuring compatibility with React 19, and the removal of outdated practices, such as specifying the React.FC
type for functional components. It is now recommended to define prop types directly, allowing React to infer the return type based on these specifications.
A pop-up modal is a crucial UI element you can use to engage with users when you need to collect input or prompt them to take action. Advancements in frontend web development have made it easier than ever to incorporate modal dialogs into your apps.
\\nIn this article, we will focus on utilizing the native HTML5 <dialog>
element to construct a reusable pop-up modal component in React. You can check out the complete code for our React pop-up modal in this GitHub repository.
A modal dialog is a UI element that temporarily blocks a user from interacting with the rest of the application until a specific task is completed or canceled. It overlays the main interface and demands the user’s attention.
\\nA typical example of a modal is an email subscription box frequently found on blog websites. Until the user responds by subscribing or dismissing the modal, they cannot interact with the underlying content in the main interface. Other examples include login or signup dialogs, file upload boxes, file deletion confirmation prompts, and more.
\\nModals are useful for presenting critical alerts or obtaining important user input. However, they should be used sparingly to avoid disrupting the user experience unnecessarily.
\\nNon-modal dialogs, in contrast to modal dialogs, allow users to interact with the application while the dialog is open. They are less intrusive and do not demand immediate attention.
\\nSome examples of non-modal dialogs are site preference panels, help dialogs, cookie consent dialogs, context menus — the list goes on.
\\nThis article is primarily focused on modal dialogs.
\\n<dialog>
element
Before the native HTML <dialog>
element was introduced, developers had to rely solely on JavaScript to add the modal functionality to HTML divs.
However, the native HTML <dialog>
element is now widely supported on modern browsers. Thanks to the JavaScript HTMLDialogElement
Web API — designed specifically for the <dialog>
element — modal dialogs have become more semantically coherent and easier to handle.
The native dialog element comes packed with accessibility features such as the dialog role and the modal aria attribute. These bits of info are read automatically by assistive technologies like screen readers, enhancing the overall accessibility of modals you create using <dialog>
.
This also means that you don’t necessarily need a third-party library to construct pop-up modals, and can use the native dialog element to do the same with fewer lines of code.
\\nHTMLDialogElement
API
Let’s explore the essential markup for constructing a modal using the <dialog>
element:
<button id=\"openModal\">Open the modal</button>\n\n<dialog id=\"modal\" class=\"modal\">\n  <button id=\"closeModal\" class=\"modal-close-btn\">Close</button>\n  <p>...</p>\n  <!-- Add more elements as needed -->\n</dialog>\n\n
Note that the modal component can be set to open by default by including the open
attribute within the <dialog>
element in the markup:
<dialog open>\\n ...\\n</dialog>\\n\\n
We can now utilize the JavaScript HTMLDialogElement
API to control the visibility of the modal component that we previously defined. It’s a straightforward process that involves obtaining references to the modal itself along with the buttons responsible for opening and closing it.
By utilizing the showModal
and close
methods provided by the HTMLDialogElement
API, we can easily establish the necessary connections:
const modal = document.querySelector(\"#modal\");\nconst openModal = document.querySelector(\"#openModal\");\nconst closeModal = document.querySelector(\"#closeModal\");\n\nopenModal?.addEventListener(\'click\', () => {\n  modal?.showModal();\n});\n\ncloseModal?.addEventListener(\'click\', () => {\n  modal?.close();\n});\n\n
Note that if we use dialog.show()
instead of dialog.showModal()
, our <dialog>
element will behave like a non-modal element.
Take a look at the following implementation. It may be simple, but it is fully functional. It is also much easier to integrate and provides greater semantic value than a comprehensive modal solution built entirely with JavaScript:
\\nSee the Pen
\\nThe native <dialog> modal: A basic example by Rahul (@_rahul)
\\non CodePen.
<dialog>
element
A modal interface powered by the HTML <dialog>
element is easy to style and has a special pseudo-class that makes modal elements simple to select and style. I’ll keep the styling part simple for this tutorial and focus more on the basics before delving into the React implementation.
:modal
pseudo-class
The :modal
CSS pseudo-class was specifically designed for UI elements with modal-like properties. It enables easy selection of a dialog displayed as a modal and the application of appropriate styles to it:
dialog {\\n /* Styles for dialogs that carry both modal and non-modal behaviors */\\n}\\n\\ndialog:modal {\\n /* Styles for dialogs that carry modal behavior */\\n}\\n\\ndialog:not(:modal) {\\n /* Styles for dialogs that carry non-modal behavior */\\n}\\n\\n
The choice between these approaches — selecting an element directly to set defaults, selecting its states to apply state-specific styles, or using CSS classes to style the components — is entirely subjective.
\\nEach method offers different advantages, so the most suitable approach for styling will depend on the developer’s preference and the project’s operating procedure. I’ll go the CSS classes route to style our modal.
\\nLet’s enhance it by incorporating rounded corners, spacing, a drop shadow, and some layout properties. You can add or customize these properties according to your specific needs:
\\n.modal {\\n position: relative;\\n max-width: 20rem;\\n padding: 2rem;\\n border: 0;\\n border-radius: 0.5rem;\\n box-shadow: 0 0 0.5rem 0.25rem hsl(0 0% 0% / 10%);\\n}\\n\\n
Additionally, we’ll position the Close button in the top right corner so that it doesn’t interfere with the modal content. Furthermore, we’ll set some default styles for the buttons and input fields used in our application:
\\n.modal-close-btn {\\n font-size: .75em;\\n position: absolute;\\n top: .25em;\\n right: .25em;\\n}\\n\\ninput[type=\\"text\\"],\\ninput[type=\\"email\\"],\\ninput[type=\\"password\\"],\\nbutton {\\n padding: 0.5em;\\n font: inherit;\\n line-height: 1;\\n}\\n\\nbutton {\\n cursor: pointer;\\n}\\n\\n
::backdrop
pseudo-element
When using traditional modal components, a backdrop area typically appears when the modal is displayed. This backdrop acts as a click trap, preventing interaction with elements in the background and focusing solely on the modal component.
\\nTo emulate this functionality, the native <dialog>
element introduces the CSS ::backdrop
pseudo-element. Here’s an example illustrating its usage:
.modal::backdrop {\\n background: hsl(0 0% 0% / 50%);\\n}\\n\\n
The user agent style sheet will automatically apply default styles to the backdrop pseudo-element of dialog elements with a fixed position, spanning the full height and width of the viewport.
\\n\\nThe backdrop feature will not function for non-modal dialog elements, as this type of element allows users to interact with the underlying content while the dialog is open.
\\nThe following example showcases an example implementation of all the aforementioned styling. Click the Open the modal button to observe the functionality:
\\nSee the Pen
\\nThe native <dialog> modal: CSS Styling by Rahul (@_rahul)
\\non CodePen.
Notice how the previously mentioned backdrop area works. When the modal is open, you’re not able to click on anything in the background until you click the Close button.
\\n\\n<dialog>
element
Now that we understand the basic HTML structure and styles of our pop-up modal component, let’s transfer this knowledge to React by creating a new React project.
\\nIn this example, I’ll be using React with TypeScript, so the code provided will be TypeScript-specific. However, I also have a JavaScript-based demo of the component we are about to build that you can reference if you are using React with JavaScript instead.
\\nOnce the React project is set up, let’s create a directory named components
. Inside this directory, create a subdirectory called Modal
to manage all of our Modal
dialog component files. Now, let’s create a file inside the Modal
directory called Modal.tsx
:
import { useRef, type ReactNode } from \"react\";\n\n// Accept children for now; the full props type is defined in the next section\nconst Modal = ({ children }: { children: ReactNode }) => {\n  const modalRef = useRef<HTMLDialogElement>(null);\n\n  return (\n    <dialog ref={modalRef} className=\"modal\">\n      {children}\n    </dialog>\n  );\n}\n\nexport default Modal;\n\n
In the above code snippet, we simply defined the Modal
component and used the useRef
Hook to create a reference to the HTML <dialog>
element that we could use later in the useEffect
Hooks.
N.B., instead of implementing a React.FC<MyProps>
return type on components in the examples ahead, I have specified direct typing (e.g., props: MyProps
) for component props, which helps React infer return types automatically. This is now the recommended way to structure your components.
To make this component work, we need to consider the following points to determine the props we will need:
\nWhether the modal is currently open — the isOpen
prop
Whether the modal should render its own close button — the hasCloseBtn
prop
What should happen when the modal closes — the onClose
callback
The modal’s content, passed through the props.children
property provided by React
The above points contribute to shaping the type structure of our props, which we will construct using the TypeScript interface as illustrated below:
\\ninterface ModalProps {\\n isOpen: boolean;\\n hasCloseBtn?: boolean;\\n onClose?: () => void;\\n children: React.ReactNode;\\n};\\n\\n
Some properties in the above type are optional to avoid additional setup — such as showing a close button in the modal or executing something on closing the dialog — if not needed.
\\nThe Modal
component will be used for presentational purposes while also utilizing the native HTML5 Dialog API to manage modal visibility. Instead of managing its internal state directly, it receives body and actions through its props and uses the Dialog API methods to control its functioning. This approach not only keeps Modal
simple to implement but also truly reusable in a variety of cases.
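With ModalProps defined, the component signature can destructure these props so that isOpen, onClose, and hasCloseBtn are in scope for the snippets that follow. Here’s a sketch; defaulting hasCloseBtn to true is an assumption:

const Modal = ({ isOpen, hasCloseBtn = true, onClose, children }: ModalProps) => {
  const modalRef = useRef<HTMLDialogElement>(null);

  // ...the effects and handlers discussed below go here

  return (
    <dialog ref={modalRef} className="modal">
      {children}
    </dialog>
  );
};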
With the basic structure of our Modal
component in place, we can now proceed to implement the functionality for opening the modal.
Modal
The useEffect
Hook is ideal for keeping things in sync because it enables performing side effects, such as updating states or interacting with APIs, in response to changes in specific dependencies.
As already discussed, we are not dealing with state variables here. Instead, we will interact with the HTMLDialogElement
API within a useEffect
block, which re-runs whenever the isOpen
prop changes. This will ensure that the visibility of our Modal
stays in sync with isOpen
, allowing the component to respond accurately to external changes.
To implement the HTML <dialog>
modal with React, we will first grab the reference to our modal element in the DOM using the useRef
Hook. If an occurrence is found, we will conditionally switch the modal’s visibility based on the value of isOpen
by utilizing Dialog.showModal()
and Dialog.close()
. Here’s what it will look like:
useEffect(() => {\\n\\n // Grabbing a reference to the modal in question\\n const modalElement = modalRef.current;\\n if (!modalElement) return;\\n\\n // Open modal when `isOpen` changes to true\\n if (isOpen) {\\n modalElement.showModal();\\n } else {\\n modalElement.close();\\n }\\n}, [isOpen]);\\n\\n
Modal
We should now create a utility function that incorporates the optional onClose
callback. This function can be used later to easily close the Modal
dialog in different scenarios:
const handleCloseModal = () => {\\n if(onClose) {\\n onClose();\\n }\\n};\\n\\n
If you observe closely, the ability to close the modal by pressing the escape
key is an inherent feature of the HTML5 <dialog>
element.
However, because our Modal
component depends on the isOpen
Boolean to operate, it will malfunction after being closed by pressing the escape key because the value of isOpen
won’t be updated correctly. To ensure its proper functioning, we should fire handleCloseModal
whenever the escape key is pressed.
To achieve this, we can set up a simple handler function to invoke the handleCloseModal
function whenever the event (KeyboardEvent
) corresponds to the escape
key:
const handleKeyDown = (event: React.KeyboardEvent<HTMLDialogElement>) => {\\n if (event.key === \\"Escape\\") {\\n handleCloseModal();\\n }\\n};\\n\\n
This approach ensures that the modal is closed appropriately when the user presses the escape
key, triggering the logic we wrote in the useEffect
block of the last segment where we utilized the HTMLDialogElement
API methods.
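As an alternative sketch, you could lean on the dialog’s native cancel event, which the browser fires when the user presses Escape, instead of tracking keydown yourself:

useEffect(() => {
  const modalElement = modalRef.current;
  if (!modalElement) return;

  // The browser fires "cancel" on the dialog when the user presses Escape
  const onCancel = () => handleCloseModal();
  modalElement.addEventListener("cancel", onCancel);
  return () => modalElement.removeEventListener("cancel", onCancel);
}, []);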
In the final steps, we will utilize the optional hasCloseBtn
prop to include a close button inside the Modal
component. This button will be linked to handleCloseModal
action, which is designed to close the modal as expected.
We will also implement the handleKeyDown
function and associate it with the onKeyDown
event handler for the main HTML5 <dialog>
element that will be returned by the Modal
component.
See the code below:
\\nreturn (\\n <dialog ref={modalRef} onKeyDown={handleKeyDown}>\\n {hasCloseBtn && (\\n <button className=\\"modal-close-btn\\" onClick={handleCloseModal}>\\n Close\\n </button>\\n )}\\n {children}\\n </dialog>\\n);\\n\\n
Accessibility note: When implementing an icon-only close button with no text, make sure to provide a meaningful label with the aria-label
attribute to help screen readers figure out the behavior of the button.
With these updates, our React Modal
component is now fully functional and complete, making use of the powerful HTML5 <dialog>
element and its JavaScript API.
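For reference, here is the whole Modal.tsx assembled from the snippets above; defaulting hasCloseBtn to true is an assumption:

import React, { useEffect, useRef } from "react";

interface ModalProps {
  isOpen: boolean;
  hasCloseBtn?: boolean;
  onClose?: () => void;
  children: React.ReactNode;
}

const Modal = ({ isOpen, hasCloseBtn = true, onClose, children }: ModalProps) => {
  const modalRef = useRef<HTMLDialogElement>(null);

  // Keep the dialog's visibility in sync with the isOpen prop
  useEffect(() => {
    const modalElement = modalRef.current;
    if (!modalElement) return;

    if (isOpen) {
      modalElement.showModal();
    } else {
      modalElement.close();
    }
  }, [isOpen]);

  // Run the optional onClose callback supplied by the parent
  const handleCloseModal = () => {
    if (onClose) {
      onClose();
    }
  };

  // Keep isOpen in sync when the user presses Escape
  const handleKeyDown = (event: React.KeyboardEvent<HTMLDialogElement>) => {
    if (event.key === "Escape") {
      handleCloseModal();
    }
  };

  return (
    <dialog ref={modalRef} className="modal" onKeyDown={handleKeyDown}>
      {hasCloseBtn && (
        <button className="modal-close-btn" onClick={handleCloseModal}>
          Close
        </button>
      )}
      {children}
    </dialog>
  );
};

export default Modal;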
Modal
component
Now, let’s put the modal dialog component to use and observe its functionality.
\\nUsing the Modal
component is pretty straightforward now. We can import the component and start using it as shown in the following code:
import { useState } from \"react\";\nimport Modal from \"./components/Modal/Modal\";\n\nconst App = () => {\n  const [isModalOpen, setModalOpen] = useState<boolean>(false);\n\n  return (\n    <div className=\"App\">\n      <button onClick={() => setModalOpen(true)}>Open Modal</button>\n\n      <Modal \n        isOpen={isModalOpen} \n        onClose={() => setModalOpen(false)} \n        hasCloseBtn={true}>\n        <h1>Hey</h1>\n      </Modal>\n    </div>\n  );\n};\n\n
As you can see, the parent component manages the Modal component’s opened and closed state and also provides a content body to it.
\\nLet’s cover a more complex example where we construct one of the commonly seen modal UI elements on the web: a newsletter subscription modal dialog. This modal will include some form fields and invite the visitor to sign up for a newsletter subscription.
\\nThe purpose of developing this specific component is to showcase the versatility of the modal pattern for creating various types of modals.
\\nAs established in the previous section, we kept our Modal
component state-free and built it to manage its visibility using the dialog API methods. Our NewsletterModal
component will implement the Modal
component and handle the state for its form elements with the useState
Hook.
With states, we will demonstrate how to manage data in our components, pass it on to the frontend, or toss it over to the backend/database through an API call. For simplicity’s sake, I’m using the useState
Hook provided natively by React.
If you’re new to React, choosing the Context API or a dedicated state management library comes with a learning curve. A good starting point is our guides on managing React states with Context API, Redux, and Zustand.
\\nNewsletterModal
component
The plan is to create an additional component responsible for managing the form and its data within our newsletter modal dialog. To achieve this, let’s create a new subdirectory named NewsletterModal
under the components
directory.
Within the NewsletterModal
directory, create a new file called NewsletterModal.tsx
, which will serve as our NewsletterModal
component. Optionally, you can also add a NewsletterModal.css
file to style the component according to your requirements.
Let’s begin by importing some essential dependencies, including our Modal
component that we finished in the previous section:
import React, { useState, useEffect, useRef } from \'react\';\\nimport \'./NewsletterModal.css\';\\nimport Modal from \'../Modal/Modal\';\\n\\n
Our newsletter form will comprise two input fields — one to collect the user’s email and the other to allow users to choose their newsletter frequency preferences. We’ll include monthly, weekly, or daily options in the latter field.
\\nTo achieve this, we’ll once again utilize TypeScript interfaces. We’ll also export this interface to reuse it in the main App
component:
export interface NewsletterModalData {\\n email: string;\\n digestType: \\"daily\\" | \\"weekly\\" | \\"monthly\\";\\n}\\n\\n
Next, we’ll define the props that our NewsletterModal
component will receive:
interface NewsletterModalProps {\\n isOpen: boolean;\\n modalData: NewsletterModalData;\\n onSubmit: (data: NewsletterModalData) => void;\\n onClose: () => void;\\n}\\n\\n
As you can see, the NewsletterModal
component expects four props:
isOpen
— A Boolean indicating whether the modal is open or notmodalData
— NewsletterModalData
values to populate Modal
onSubmit
— A function that will be called when the form is submitted. It takes a property of the NewsletterModalData
type as an argumentonClose
— A function that will be called when the user closes the modalTwo of these props, namely isOpen
and onClose
, will further be used as prop values for the Modal
component.
NewsletterModal
component
Now, let’s define the actual NewsletterModal
component. It’s a functional component that takes in the props defined in the NewsletterModalProps
interface. We use object destructuring to extract these props:
const NewsletterModal = ({\\n isOpen,\\n modalData,\\n onClose,\\n onSubmit,\\n}: NewsletterModalProps) => {\\n // Component implementation goes here...\\n};\\n\\n
Next, we use the useRef
Hook to create a reference to the input element for the email field. This reference will be used later to focus on the email input when the modal is opened.
We also use the useState
Hook to create a state variable to manage the form data, initializing it with the modalData
prop.
See the code below:
\nconst focusInputRef = useRef<HTMLInputElement>(null);\nconst [formState, setFormState] = useState<NewsletterModalData>(modalData);\n\n
To handle side effects when the value of isOpen
changes, we utilize the useEffect
Hook. If isOpen
is true
and the focusInputRef
is available, not null
, we use setTimeout
to ensure that the focus on the email input element happens asynchronously:
useEffect(() => {\\n if (isOpen && focusInputRef.current) {\\n setTimeout(() => {\\n focusInputRef.current!.focus();\\n }, 0);\\n }\\n}, [isOpen]);\\n\\n
This allows the modal to be fully rendered before focusing on the input.
\\nThe function handleInputChange
is responsible for handling changes in the two form input fields — the user’s email address and newsletter frequency preferences. This function is triggered by the onChange
event of the email input and frequency select elements:
const handleInputChange = (\\n event: React.ChangeEvent<HTMLInputElement | HTMLSelectElement>\\n) => {\\n const { name, value } = event.target;\\n setFormState((prevFormData) => ({\\n ...prevFormData,\\n [name]: value,\\n }));\\n};\\n\\n
When called, the function extracts the name
and value
from the event’s target — in other words, the form element that triggered the change. It then uses the setFormState
updater function to update the form state.
Additionally, the handleInputChange
function uses the callback form of setFormState
to correctly update the state. This preserves the previous form data using the spread operator — ...prevFormData
— and updates only the changed field.
The function handleSubmit
is called when the form is submitted. It is triggered by the onSubmit
event of the form:
const handleSubmit = (event: React.FormEvent): void => {\\n event.preventDefault();\\n onSubmit(formState);\\n};\\n\\n
If the close button in the Modal is clicked, we should roll back to the previous data (modalData
, in this case) and also execute the onClose()
callback:
const handleClose = () => {\n  setFormState(modalData);\n  onClose();\n};\n\n
The handleSubmit
function prevents the default form submission behavior using event.preventDefault()
to avoid a page reload. Then, it calls the onSubmit
function from props, passing the current formState
as an argument to submit the form data to the parent component.
The handleClose
function, in turn, resets formState
back to the modalData
prop, effectively discarding any unsaved form input.
Modal
component
In the JSX block, we return the Modal
component, which will be rendered with the modal’s content.
We use our custom Modal
component and pass it three props — hasCloseBtn
, isOpen
, and onClose
. The form elements — inputs, labels, and submit button — will be rendered within the Modal
component:
return (\n  <Modal\n    hasCloseBtn={true}\n    isOpen={isOpen}\n    onClose={handleClose}\n  >\n    {/* Form JSX goes here... */}\n  </Modal>\n);\n\n
Inside the Modal
component, we render a form
element containing two sections with labels and form elements corresponding to the input
field and select
dropdown. The input
field is for the user’s email, and the select
dropdown allows the user to choose the newsletter frequency.
We bind these elements with the onChange
event handler to update the formState
when the user interacts with the form. The form element has an onSubmit
event that triggers the handleSubmit
function when the user submits the form:
<form onSubmit={handleSubmit}>\\n <div className=\\"form-row\\">\\n <label htmlFor=\\"email\\">Email</label>\\n <input\\n ref={focusInputRef}\\n type=\\"email\\"\\n id=\\"email\\"\\n name=\\"email\\"\\n value={formState.email}\\n onChange={handleInputChange}\\n required\\n />\\n </div>\\n <div className=\\"form-row\\">\\n <label htmlFor=\\"digestType\\">Digest Type</label>\\n <select\\n id=\\"digestType\\"\\n name=\\"digestType\\"\\n value={formState.digestType}\\n onChange={handleInputChange}\\n required\\n >\\n <option value=\\"daily\\">Daily</option>\\n <option value=\\"weekly\\">Weekly</option>\\n <option value=\\"monthly\\">Monthly</option>\\n </select>\\n </div>\\n <div className=\\"form-row\\">\\n <button type=\\"submit\\">Submit</button>\\n </div>\\n</form>\\n\\n
And this concludes our NewsletterModal
component. We can now export it as a default module and move on to the next section, where we will use it and finally see our Modal
component in action.
NewsletterModal
In our App.tsx
file — or any parent component of your choice — let’s begin by importing the necessary dependencies such as React, useState
, NewsletterModal
, and NewsletterModalData
. If desired, we can also use the App.css
or the related component stylesheet to style this parent component:
import React, { useState } from \'react\';\\nimport NewsletterModal, { NewsletterModalData } from \'./components/NewsletterModal/NewsletterModal\';\\nimport \'./App.css\';\\n\\n
As discussed earlier, NewsletterModalData
is an interface that defines the shape of the data to be passed between components to support the data within our NewsletterModal
component.
Within the App
component, we utilize the useState
Hook to establish two state variables:
isNewsletterModalOpen
: A Boolean state variable that tracks whether the newsletter modal is open or not. It is initialized as false
, meaning the modal is initially closed
newsletterFormData
: A state object of type NewsletterModalData
that holds the form data submitted through the NewsletterModal
. It is initialized with defaultNewsletterModalData
, which provides sensible starting values: an empty email and a weekly digest
Here’s how the code should look:
\\nconst App = () => {\\n const [isNewsletterModalOpen, setNewsletterModalOpen] =\\n useState<boolean>(false);\\n\\n // Example default data (could be fetched from an API)\\n const defaultNewsletterModalData: NewsletterModalData = {\\n email: \\"\\",\\n digestType: \\"weekly\\",\\n };\\n\\n const [newsletterFormData, setNewsletterFormData] =\\n useState<NewsletterModalData>(defaultNewsletterModalData);\\n\\n // Rest of the component implementation goes here...\\n};\\n\\n
To handle the modal states, we define two functions: handleOpenNewsletterModal
and handleCloseNewsletterModal
. These functions are used to control the state of the isNewsletterModalOpen
variable.
When handleOpenNewsletterModal
is called, it sets isNewsletterModalOpen
to true
, opening the newsletter modal. When handleCloseNewsletterModal
is called, it sets isNewsletterModalOpen
to false
, closing the newsletter modal.
See the code below:
\\nconst handleOpenNewsletterModal = () => {\\n setNewsletterModalOpen(true);\\n};\\n\\nconst handleCloseNewsletterModal = () => {\\n setNewsletterModalOpen(false);\\n};\\n\\n
The handleFormSubmit
function is called when the user submits the form inside the NewsletterModal
. It takes form data matching the NewsletterModalData
interface as an argument.
When called, the handleFormSubmit
function sets the newsletterFormData
state variable to the submitted data. After setting the data, it calls handleCloseNewsletterModal
to close the modal:
const handleFormSubmit = (data: NewsletterModalData): void => {\\n setNewsletterFormData(data);\\n handleCloseNewsletterModal();\\n};\\n\\n
Finally, we return the JSX that will be displayed as the UI for the App
component.
In the JSX, we have a div
containing a button. When clicked, this button triggers the handleOpenNewsletterModal
function, thereby opening the newsletter modal.
We check if newsletterFormData
is not null
and if its email
property is truthy. If both conditions are met, we render a message using the data from the newsletterFormData
.
Then, we render the NewsletterModal
component, passing the necessary props — isOpen
, modalData
, onSubmit
, and onClose
. These props are set as follows:
isOpen
— set to the value of isNewsletterModalOpen
to determine whether the modal should be displayed or not
modalData
— set to newsletterFormData
so the form fields are populated with the current data
— set to the handleSubmit
function to handle form submissionsonClose
— set to the handleCloseNewsletterModal
function to close the modal when requestedSee the code below:
\nreturn (\n  <>\n    <div style={{ display: \"flex\", gap: \"1em\" }}>\n      <button onClick={handleOpenNewsletterModal}>Open the Newsletter Modal</button>\n    </div>\n\n    {newsletterFormData && newsletterFormData.email && (\n      <div className=\"msg-box msg-box--success\">\n        <b>{newsletterFormData.email}</b> requested a <b>{newsletterFormData.digestType}</b> newsletter subscription.\n      </div>\n    )}\n\n    <NewsletterModal\n      isOpen={isNewsletterModalOpen}\n      modalData={newsletterFormData}\n      onSubmit={handleFormSubmit}\n      onClose={handleCloseNewsletterModal}\n    />\n  </>\n);\n\n
That’s it! We now have our App
component up and running, showing a button to open a functional newsletter modal. When the user submits the form with the appropriate information, that data is displayed on the main app page, and the modal is closed.
Check out the CodePen demo below, showcasing the implementation of all the code snippets mentioned earlier:
\\nSee the Pen
\\nReact Modal Component with HTML5 Dialog API by Rahul (@_rahul)
\\non CodePen.
For a well-organized and comprehensive version of this project, you can access the complete code on GitHub. Please note that this implementation is written in TypeScript, but it can be adapted to JavaScript by removing the type annotations as I did in this StackBlitz demo.
\\nNowadays, methods for creating modal dialogs no longer rely on third-party libraries. Instead, we can utilize the widely supported native <dialog>
element to enhance our UI modal components. This article provided a detailed explanation for creating such a modal component in React, which can be further extended and customized to suit the specific requirements of your project.
If you have any questions, leave them in the comment section below!
\\n\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\nNowadays, developing web applications that accommodate users from various languages is essential — especially if you’re building large-scale applications. Developing a multi-lingual application can be daunting, but Nuxt i18n makes it easier for Nuxt 3 projects by simplifying content translation, locale handling, and routing, enabling a smoother experience for a global audience.
\\nThis tutorial will guide you through creating a multi-lingual web application using Nuxt 3 and Nuxt i18n. You will learn to set up Nuxt i18n, configure locales, and implement translations. We will build a multi-lingual ecommerce app that displays products to users in three languages depending on a user’s chosen language.
\\nTo follow along, you’ll need:
\\nTo get Nuxt i18n to work, we need to set up a Nuxt 3 project. Go to your command line, navigate to the folder in which you wish to set up your project, and run the code below:
\nnpx nuxi init ecommerceDemo\n\n
The above code creates an ecommerceDemo
folder and initializes a Nuxt app inside the folder.
Run the following command to start your app:
\ncd ecommerceDemo\nnpm run dev\n\n
You should now have your Nuxt app running in your browser.
\\nBefore we can make our app multi-lingual, we need to first set up its basic features.
\\nLet’s create the folders we need to organize our project. In the root folder, create three folders:
\\ncomponents
pages
static
We also need a JSON file to hold our data. In the static
folder, create a JSON file with the name products
. Then paste the following:
[\\n {\\n \\"id\\": 1,\\n \\"name\\": \\"Timberland boots\\",\\n \\"description\\": \\"Leather boot crafted with your legs in mind\\",\\n \\"price\\": 50\\n },\\n {\\n \\"id\\": 2,\\n \\"name\\": \\"Product B\\",\\n \\"description\\": \\"This is Product B\\",\\n \\"price\\": 75\\n },\\n {\\n \\"id\\": 3,\\n \\"name\\": \\"Product B\\",\\n \\"description\\": \\"This is Product C\\",\\n \\"price\\": 75\\n },\\n {\\n \\"id\\": 4,\\n \\"name\\": \\"Product B\\",\\n \\"description\\": \\"This is Product C\\",\\n \\"price\\": 75\\n },\\n {\\n \\"id\\": 5,\\n \\"name\\": \\"Product B\\",\\n \\"description\\": \\"This is Product C\\",\\n \\"price\\": 75\\n }\\n]\\n\\n
The above file contains information we will display on our app.
\\nNext, create a file called ProductCard.vue
in the components folder and paste the code below:
<script>\\nexport default {\\n props: {\\n title: {\\n type: String,\\n required: true\\n },\\n price: {\\n type: Number,\\n required: true\\n }\\n }\\n};\\n</script>\\n\\n<template>\\n <div class=\\"product-card\\">\\n <h3>{{ item.name }}</h3>\\n <p>{{ item.description }}</p>\\n <p>Price: ${{ item.price }}</p>\\n <button>Add to Cart</button>\\n </div>\\n</template>\\n\\n<style scoped>\\n.product-card {\\n border: 1px solid #ddd;\\n background-color: bisque;\\n padding: 16px;\\n margin: 8px;\\n border-radius: 8px;\\n text-align: center;\\n}\\n\\n.product-button {\\n border: 1px solid blueviolet;\\n border-radius: 8px;\\n padding:5px;\\n}\\n</style>\\n\\n
We’ve just created the ProductCard
component to display our products nicely in our ecommerce store.
Next, create a index.vue
file inside the pages
folder and paste the following code to display the ProductCard
component:
<script>\\nimport products from \'~/static/products.json\';\\nimport ProductCard from \'~/components/ProductCard.vue\';\\n\\nexport default {\\n components: {\\n ProductCard,\\n },\\n data() {\\n return {\\n items: products,\\n };\\n },\\n};\\n</script>\\n\\n<template>\\n <div>\\n <h1>Welcome to our e-commerce store!</h1>\\n <div class=\\"product-list\\">\\n <ProductCard\\n v-for=\\"item in items\\"\\n :key=\\"item.id\\"\\n :item=\\"item\\"\\n />\\n </div>\\n </div>\\n</template>\\n\\n<style scoped>\\n.product-list {\\n display: grid;\\n grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));\\n gap: 16px;\\n padding: 16px;\\n}\\n\\nh1 {\\n text-align: center;\\n margin-bottom: 24px;\\n}\\n</style>\\n\\n
Now, update app.vue
to add some style:
<template>\\n <div>\\n <NuxtPage />\\n </div>\\n</template>\\n\\n<style>\\nbody {\\n background-color: #f0f0f0;\\n display: grid;\\n place-content: center;\\n height: 100vh;\\n text-align: center;\\n font-family: sans-serif;\\n}\\n\\na,\\na:visited {\\n color: #fff;\\n text-decoration: none;\\n padding: 8px 10px;\\n background-color: cadetblue;\\n border-radius: 5px;\\n font-size: 14px;\\n display: block;\\n margin-bottom: 50px;\\n}\\na:hover {\\n background-color: rgb(23, 61, 62);\\n}\\n</style>\\n\\n
Now let’s fire up our app with npm run dev
:
Our app is now working!
\\nNuxt i18n is an internationalization (i18n) module that integrates Vue I18n into Nuxt projects, optimizing performance and SEO. It automatically adds locale prefixes to URLs, provides composable functions for setting locale-based SEO metadata, and supports lazy loading of selected languages, ensuring a user-friendly multi-lingual experience.
\\nTo set up Nuxt i18n, run the following in your terminal:
\\nnpx nuxi@latest module add @nuxtjs/i18n@next\\n\\n
The above command will install the Nuxt i18n module for our project, but we still have some work to do to get Nuxt i18n working on our project.
\\n\\nOpen the next config.ts
file and paste the code below after modules: [\'@nuxtjs/i18n\']
:
i18n: {\n  /* module options */\n  lazy: true,\n  langDir: \"locales\",\n  strategy: \"prefix_except_default\",\n  locales: [\n    {\n      code: \"en-US\",\n      iso: \"en-US\",\n      name: \"English (US)\",\n      file: \"en-US.json\",\n    },\n    {\n      code: \"es-ES\",\n      iso: \"es-ES\",\n      name: \"Español\",\n      file: \"es-ES.json\",\n    },\n    {\n      code: \"in-HI\",\n      iso: \"hi-IN\",\n      name: \"हिंदी\",\n      file: \"in-HI.json\",\n    },\n  ],\n  defaultLocale: \"en-US\",\n},\n\n
In the code above, we defined the Nuxt i18n locales, specifying the locale codes: en-US
for English, es-ES
for Spanish, and in-HI
for Hindi. We also specified the directory for the language translation files with langDir: 'locales'
, and enabled optimized loading of translation files with lazy: true
, which tells Nuxt 3 to load the translation files only when needed. We also set the default language to English with defaultLocale: 'en-US'
.
Nuxt i18n translation files are written in the JSON file format. To add languages and translations to our project, we will create an i18n
folder and, inside of that, a locales
folder.
For our app, we will add the following languages: English (US), Spanish, and Hindi.
\nInside the locales folder, we will create three files:
\\nen-US.json
es-ES.json
in-HI.json
In the en-US.json
file, paste the following JSON:
{\\n \\"welcome\\": \\"Welcome to our e-commerce store!\\",\\n \\"product_title\\": \\"Product Title\\",\\n \\"product_price\\": \\"Price\\",\\n \\"product_description\\": \\"Description\\",\\n \\"add_to_cart\\": \\"Add to Cart\\",\\n \\"product_a\\": \\"Timberland boots\\",\\n \\"product_b\\": \\"Nike Snikers\\",\\n \\"product_c\\": \\"Chelsea boots\\",\\n \\"product_a_price\\": \\"200\\",\\n \\"product_b_price\\": \\"250\\",\\n \\"product_c_price\\": \\"300\\"\\n }\\n\\n
In the es-ES.json
file, paste the following JSON:
{\\n \\"welcome\\": \\"¡Bienvenido a nuestra tienda en línea!\\",\\n \\"product_title\\": \\"Título del producto\\",\\n \\"product_price\\": \\"Precio\\",\\n \\"product_description\\": \\"Descripción\\",\\n \\"add_to_cart\\": \\"Añadir a la cesta\\",\\n \\"product_a\\": \\"Botas Timberland\\",\\n \\"product_b\\": \\"Zapatos nike\\",\\n \\"product_c\\": \\"Botas Chelsea\\",\\n \\"product_a_price\\": \\"200\\",\\n \\"product_b_price\\": \\"250\\",\\n \\"product_c_price\\": \\"300\\"\\n}\\n\\n
In the in-HI.json
file, paste the following JSON:
{\\n \\"welcome\\": \\"हमारे ई-कॉमर्स स्टोर में आपका स्वागत है!\\",\\n \\"product_title\\": \\"उत्पाद का शीर्षक\\",\\n \\"product_price\\": \\"कीमत\\",\\n \\"product_description\\": \\"विवरण\\",\\n \\"add_to_cart\\": \\"कार्ट में जोड़ें\\",\\n \\"product_a\\": \\"टिम्बरलैंड बूट्स\\",\\n \\"product_b\\": \\"नाइक स्नीकर्स\\",\\n \\"product_c\\": \\"चेल्सी बूट्स\\",\\n \\"product_a_price\\": \\"200\\",\\n \\"product_b_price\\": \\"250\\",\\n \\"product_c_price\\": \\"300\\"\\n}\\n\\n
We have just created the translation files for our app.
\\nIn this section, we’ll implement a language switcher to use the created translation files to display products in the selected language and update the current locale.
\\nTo keep our code well organized, create a LanguageSwitcher.vue
file inside the component folder, and add the code below:
<template>\\n <div class=\\"language-switcher\\">\\n <select v-model=\\"selectedLocale\\" @change=\\"changeLocale\\">\\n <option v-for=\\"locale in $i18n.locales\\" :key=\\"locale.code\\" :value=\\"locale.code\\">\\n {{ locale.name }}\\n </option>\\n </select>\\n </div>\\n</template>\\n\\n<script>\\nexport default {\\n data() {\\n return {\\n selectedLocale: this.$i18n.locale, // Set initial locale\\n };\\n },\\n methods: {\\n changeLocale() {\\n this.$i18n.setLocale(this.selectedLocale); // Dynamically change locale\\n },\\n },\\n};\\n</script>\\n\\n<style scoped>\\n.language-switcher {\\n margin: 16px 0;\\n}\\n\\nselect {\\n padding: 8px 12px;\\n border: 1px solid #ddd;\\n border-radius: 4px;\\n font-size: 16px;\\n cursor: pointer;\\n}\\n</style>\\n\\n
Based on the code above, when the user selects any option from the select
HTML tag, the @change
event triggers the changeLocale
method, which updates the application locale using the JSON files in our locales folder.
Now, let’s update the code for our pages/index.vue
file to get the translation working:
<script>\\nimport products from \'~/static/products.json\';\\nimport ProductCard from \'../components/ProductCard.vue\';\\nimport LanguageSwitcher from \'../components/LanguageSwitcher.vue\';\\n\\nexport default {\\n components: { ProductCard, LanguageSwitcher },\\n data() {\\n return {\\n items: products\\n };\\n }\\n};\\n</script>\\n\\n<template>\\n <div>\\n <LanguageSwitcher />\\n <h1>{{ $t(\'welcome\') }}</h1>\\n <div class=\\"product-list\\">\\n <ProductCard :title=\\"$t(\'product_a\')\\" :price=\\"$t(\'product_a_price\')\\"/>\\n <ProductCard :title=\\"$t(\'product_b\')\\" :price=\\"$t(\'product_b_price\')\\"/>\\n <ProductCard :title=\\"$t(\'product_c\')\\" :price=\\"$t(\'product_c_price\')\\"/>\\n </div>\\n </div>\\n</template>\\n\\n<style scoped>\\n.product-list {\\n display: flex;\\n flex-wrap: wrap;\\n gap: 16px;\\n}\\n</style>\\n\\n
Here is what our app should look like now:
\\n@nuxtjs/i18n adds some metadata to improve your page’s SEO using the useLocaleHead
and definePageMeta()
composable functions.
The module enables several SEO optimizations, including:
\\n<html>
taghreflang
alternate links for better multi-lingual navigationTo configure SEO for our app, let’s first configure the locales
option in nuxt.config.ts
, adding a language
option set to the locale language tags to each object as follows:
export default defineNuxtConfig({\n  ...\n  i18n: {\n    locales: [\n      {\n        code: \"en-US\",\n        iso: \"en-US\",\n        language: \"en-US\",\n        name: \"English (US)\",\n        file: \"en-US.json\",\n      },\n      {\n        code: \"es-ES\",\n        iso: \"es-ES\",\n        language: \"es-ES\",\n        name: \"Español\",\n        file: \"es-ES.json\",\n      },\n      {\n        code: \"in-HI\",\n        iso: \"hi-IN\",\n        language: \"hi-IN\",\n        name: \"हिंदी\",\n        file: \"in-HI.json\",\n      },\n    ],\n  },\n});\n\n
Then, set the baseUrl
option to a production domain to make alternate URLs fully qualified:
export default defineNuxtConfig({\n  ...\n  i18n: {\n    ...\n    baseUrl: \'https://my-nuxt-app.com\',\n  },\n});\n\n
Now, we can call the composable functions in two places within the Nuxt project: globally, in the Vue components under the layouts directory, and per page, in the Vue components under the pages directory.
\\n\\nTo enable the SEO metadata globally, set the meta components within the Vue components in the layouts
directory as follows:
<script setup>\\nconst route = useRoute()\\nconst { t } = useI18n()\\nconst head = useLocaleHead()\\nconst title = computed(() => t(route.meta.title ?? \'TBD\', t(\'layouts.title\'))\\n);\\n</script>\\n\\n<template>\\n <div>\\n <Html :lang=\\"head.htmlAttrs.lang\\" :dir=\\"head.htmlAttrs.dir\\">\\n <Head>\\n <Title>{{ title }}</Title>\\n <template v-for=\\"link in head.link\\" :key=\\"link.id\\">\\n <Link :id=\\"link.id\\" :rel=\\"link.rel\\" :href=\\"link.href\\" :hreflang=\\"link.hreflang\\" />\\n </template>\\n <template v-for=\\"meta in head.meta\\" :key=\\"meta.id\\">\\n <Meta :id=\\"meta.id\\" :property=\\"meta.property\\" :content=\\"meta.content\\" />\\n </template>\\n </Head>\\n <Body>\\n <slot />\\n </Body>\\n </Html>\\n </div>\\n</template>\\n\\n
The useRoute
function retrieves the current route object, including metadata like the page title specified in the route configuration. The t
function from useI18n
translates keys into the active locale’s language, enabling localization. Meanwhile, useLocaleHead
generates localized metadata, such as lang
attributes, hreflang
links, and other SEO-related tags for the current locale.
To override the global SEO metadata, use the definePageMeta()
function within the Vue components in the pages
directory as follows:
<script setup>\\ndefinePageMeta({\\n title: \'pages.title.top\' // set resource key\\n})\\n\\nconst { locale, locales, t } = useI18n()\\nconst switchLocalePath = useSwitchLocalePath()\\n\\nconst availableLocales = computed(() => {\\n return locales.value.filter(i => i.code !== locale.value)\\n})\\n</script>\\n\\n<template>\\n <div>\\n <p>{{ t(\'pages.top.description\') }}</p>\\n <p>{{ t(\'pages.top.languages\') }}</p>\\n <nav>\\n <template v-for=\\"(locale, index) in availableLocales\\" :key=\\"locale.code\\">\\n <span v-if=\\"index\\"> | </span>\\n <NuxtLink :to=\\"switchLocalePath(locale.code)\\">\\n {{ locale.name ?? locale.code }}\\n </NuxtLink>\\n </template>\\n </nav>\\n </div>\\n</template>\\n\\n
The definePageMeta
function sets the page metadata using a key (pages.title.top
) that corresponds to a localized title resource, which is automatically translated into the active language. The useSwitchLocalePath
function generates paths for switching between languages, ensuring correct routing for each locale. The availableLocales
computed property excludes the current locale from the list of all supported locales, showing only the options available for switching.
You can also call the useHead()
function in Vue components in the pages
directory to add more metadata. The useHead()
function will merge the additional metadata to the global metadata:
<script setup>\\ndefinePageMeta({\\n title: \'pages.title.about\'\\n})\\n\\nuseHead({\\n meta: [{ property: \'og:title\', content: \'this is og title for about page\' }]\\n})\\n</script>\\n\\n<template>\\n <h2>{{ $t(\'pages.about.description\') }}</h2>\\n</template>\\n\\n
Nuxt i18n provides a way to add locale prefixes to URLs with routing strategies. It comes packed with four routing strategies:
\\nno_prefix
prefix_except_default
prefix
prefix_and_default
To demonstrate what each of these routing strategies does within our app, let’s revisit our nuxt.config.ts
file. We will be adjusting strategy: \"prefix_except_default\"
to strategy: \\"no_prefix\\"
.
In the following example, no locale-specific prefix is added to the URL:
\\nNow, change the strategy to strategy: \\"prefix_except_default\\"
. This adds a locale-specific prefix to the URL for non-default languages, but the default language does not have a prefix:
If you change the language to English, you will notice there is no URL prefix added to the URL because English is the default language.
\\nNow, change the strategy to strategy: \\"prefix\\"
. This adds a locale-specific prefix to the URL for all languages, including the default:
Now, change the strategy to strategy: \\"prefix_and_default\\"
. This combines all the above strategies, with the added advantage that you will get prefixed and non-prefixed URLs for the default language.
Nuxt i18n Micro is an efficient internationalization module for Nuxt. It’s designed to deliver top-notch performance even in large-scale projects, outperforming traditional options like @nuxtjs/i18n.
\\nBuilt with speed in mind, Nuxt i18n Micro helps cut down build times, ease server demands, and keep bundle sizes small.
\\nGetting Nuxt i18n Micro working on your project is easy. In your terminal, run the following code:
\\nnpm install nuxt-i18n-micro\\n\\n
Next, add it to your nuxt.config.ts
:
export default defineNuxtConfig({\\n modules: [\\n \'nuxt-i18n-micro\',\\n ],\\n i18n: {\\n locales: [\\n { code: \'en\', iso: \'en-US\', dir: \'ltr\' },\\n { code: \'fr\', iso: \'fr-FR\', dir: \'ltr\' },\\n { code: \'ar\', iso: \'ar-SA\', dir: \'rtl\' },\\n ],\\n defaultLocale: \'en\',\\n translationDir: \'locales\',\\n meta: true,\\n },\\n})\\n\\n
You’re now ready to use Nuxt i18n Micro in your project and compare its speed to Nuxt i18n. Check out the Nuxt documentation to learn more about Nuxt i18n Micro.
\\nTests were conducted under identical conditions to show the efficiency of Nuxt I18n Micro. Both modules were tested with a 10MB translation file on the same hardware to ensure a fair benchmark.
\\n\\n | Nuxt i18n | \\nNuxt i18n Micro | \\n
---|---|---|
Total size | \\n54.7 MB (3.31 MB gzip) | \\n1.93 MB (473 kB gzip) — 96% smaller | \\n
Max CPU usage | \\n391.4% | \\n220.1% — 44% lower | \\n
Max memory usage | \\n8305 MB | \\n655 MB — 92% less memory | \\n
Elapsed time | \\n0h 1m 31s | \\n0h 0m 5s — 94% faster | \\n
| | Nuxt i18n | Nuxt i18n Micro |
| --- | --- | --- |
| Requests per second | 49.05 [#/sec] (mean) | 61.18 [#/sec] (mean) — 25% more requests per second |
| Time per request | 611.599 ms (mean) | 490.379 ms (mean) — 20% faster |
| Max memory usage | 703.73 MB | 323.00 MB — 54% less memory usage |
These results demonstrate that Nuxt i18n Micro significantly outperforms the original module in every critical area.
\\nWhen it comes to SEO optimization for multi-lingual sites, Nuxt i18n Micro simplifies the process by automatically generating essential meta tags and attributes that inform search engines about the structure and content of your site with a single flag.
\\nTo enable automatic SEO management, ensure the meta
option is set to true
in your nuxt.config.ts
file:
export default defineNuxtConfig({\\n modules: [\'nuxt-i18n-micro\'],\\n i18n: {\\n meta: true,\\n },\\n})\\n\\n
Nuxt i18n offers an easy and efficient way to build multi-lingual Nuxt apps that cater to people of different languages. In this tutorial we looked at how Nuxt i18n can be used to achieve the internationalization of apps while specifically looking at how to configure Nuxt i18n, adding locale files, adding a language switcher in Nuxt i18n while building our multi-lingual ecommerce app. Finally, we compared Nuxt i18n to Nuxt i18n Micro, highlighting the performance benefits that the latter module offers.
\\n\\n\\nHey there, want to help make our blog better?
\\n\\n Join LogRocket’s Content Advisory Board. You’ll help inform the type of\\n content we create and get access to exclusive meetups, social accreditation,\\n and swag.\\n
\\n Sign up now\\n<details>
and <summary>
work together?\\n <details>
and <summary>
\\n <details>
group (exclusive accordion)\\n <details>
\\n The <details>
and <summary>
HTML elements, collectively referred to as a disclosure widget, are not easy to style. People often make their own version with a custom component because of the limitations. However, as CSS has evolved, these elements have gotten easier to customize. In this article, I will cover how you can customize the appearance and behavior of a disclosure widget.
How do <details>
and <summary>
work together?
<details>
is an HTML element that creates a disclosure widget in which additional information is hidden. A disclosure widget is typically presented as a triangular marker accompanied by some text.
When the user clicks on the widget or focuses on it and presses the space bar, it opens and reveals additional information. The triangle marker points down to indicate that it is in an open state:
\\nThe disclosure widget has a label that is always shown and is provided by the <summary>
element. This is the first child. If it is omitted, a default label is provided by the browser. Usually, it will say “details”:
You can also provide multiple elements after the <summary>
element to represent the additional information:
<details>\\n <summary>Do you want to know more?</summary>\\n <h3>Additional info</h3>\\n <p>The average human head weighs around 10 to 11 pounds (approximately 4.5 to 5 kg).</p>\\n</details>\\n\\n
<details>
and <summary>
There are a few interoperability issues that should be considered when styling the <details>
and <summary>
elements. Let’s cover the basics before we get into some common use cases.
The <summary>
element is similar to a <li>
element because its default style includes display: list-item
. Therefore, it supports the list-style
shorthand property and its longhand properties. The browser support for the list-style
properties is quite good, but Safari is still lagging.
The disclosure widget has two pseudo-elements to style its constituent parts:
\\n::marker
pseudo-element: Represents the triangular marker that sits at the beginning of <summary>
. The styling story for this is a bit complicated. We are limited to a small set of CSS properties. Browser support is good for ::marker
, but Safari doesn’t currently support the complete set of properties. I will discuss this in more detail in the “Styling the summary marker” section of this article::details-content
pseudo-element: Represents the “additional information” of <details>
. This is a recent addition, so browser support is currently limited to ChromeIn the following sections, I will demonstrate some of the newer, lesser-known ways to customize a disclosure widget.
\\nWhen you open a disclosure widget, it snaps open instantly. Blink, and you will miss it!
\\nIt is preferable to transition from one state to another in a more gradual way to show the user the impact of their action. Can we add a transition animation to the opening and closing actions of a disclosure widget? In short, yes!
To animate this, we want the height of the hidden content to transition from zero to its final height. The default value of the height property is auto, which leaves it to the browser to calculate the height based on the content. Animating to a value of auto was not possible in CSS until the addition of the interpolate-size property. Browser support is still limited for the newer CSS features we need (chiefly interpolate-size and ::details-content), so this is a great example of a progressive enhancement. It currently works in Chrome!
Here is a CodePen example of the animation.
First, we add interpolate-size so we can transition to a height of auto:
details {
  interpolate-size: allow-keywords;
}
Next, we want to describe the closed style. We want the “additional info” content to have a height of zero and ensure that no content is visible, i.e., we want to prevent overflow.
We use the ::details-content pseudo-element to target the hidden content. I use the block-size property rather than height because it’s a good habit to use logical properties. We need to include content-visibility in the transition because the browser sets content-visibility: hidden on the content in the closed state; the closing animation will not work without it:
/* closed state */
details::details-content {
  block-size: 0;
  overflow: hidden;

  transition: content-visibility, block-size;
  transition-duration: 750ms;
  transition-behavior: allow-discrete;
}
The animation still won’t work as expected because content-visibility is a discrete animated property. That means there is no interpolation; by default, the browser flips between the two values instantly, so nothing gradual happens. We don’t want this.

If we include transition-behavior: allow-discrete, the hidden value is applied only at the very end of the closing transition, so the content stays visible while it animates shut and we get our gradual transition.
Also, while block-size is transitioning from 0, the content overflows the box in the intermediate states, so most of it would be visible as soon as the widget begins to open. To prevent this, we add overflow: hidden.
Lastly, we add the style for the open state. We want the final state to have a size of auto:
/* open state */
details[open]::details-content {
  block-size: auto;
}
Those are the broad strokes. If you would prefer a more detailed video explanation, check out Kevin Powell’s walkthrough on how to animate <details> and <summary>.
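Putting the pieces together, here is the full set of rules from the steps above collected into one sketch (same values as used in this article):

details {
  interpolate-size: allow-keywords;
}

/* closed state */
details::details-content {
  block-size: 0;
  overflow: hidden;

  transition: content-visibility, block-size;
  transition-duration: 750ms;
  transition-behavior: allow-discrete;
}

/* open state */
details[open]::details-content {
  block-size: auto;
}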
The disclosure widget may grow horizontally if the “additional information” content is wider than the <summary> content. That may cause an unwanted layout shift. In that case, you may want to set a width on <details>.
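For example, here is a one-line sketch of that fix; the 60ch value is my own placeholder, not a recommendation from the article:

details {
  /* fix the width so the widget doesn't grow when it opens */
  inline-size: 60ch;
}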
Like any animation, you should consider users who are sensitive to motion. You can use the prefers-reduced-motion media query to cater to that scenario:
@media (prefers-reduced-motion) {
  /* styles to apply if a user's device settings are set to reduced motion */

  details::details-content {
    transition-duration: 0.8s; /* slower speed */
  }
}
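If you would rather disable the animation entirely for these users (a reasonable alternative to the slower speed shown above, though not what the article does), you could write:

@media (prefers-reduced-motion: reduce) {
  details::details-content {
    /* no transition: the widget snaps open and closed */
    transition: none;
  }
}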
<details> group (exclusive accordion)

A common UI pattern is an accordion component: a stack of disclosure widgets that can be expanded to reveal their content. To implement this pattern, you just need multiple consecutive <details> elements. You can style them to visually indicate that they belong together:
<details>
  <summary>Payment Options</summary>
  <p>...</p>
</details>
<details>
  <summary>Personalise your PIN</summary>
  <p>...</p>
</details>
<details>
  <summary>How can I add an additional cardholder to my Platinum Mastercard</summary>
  <p>...</p>
</details>
The default style is fairly simple. Each <details> occupies its own line. They are positioned close together (no margin or padding) and are perceived as a group because of their proximity. If you want to emphasize that they belong together, you can add a border and give them the same background styles, as in the sketch and CodePen example below:
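Here is a minimal sketch of that idea; the specific border and background values are my own choices, not taken from the CodePen:

details {
  border: 1px solid #999;
  background: #f4f4f4;
  padding: 0.5rem 1rem;
}

/* avoid doubled-up borders between adjacent widgets */
details + details {
  border-top: none;
}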
See the Pen “<details> stack – an accordion even” by rob2 (@robatronbobby) on CodePen.
A variation of this pattern is to make the accordion exclusive so that only one of the disclosure widgets can be open at a time. As soon as one is opened, the browser closes the one that was previously open. You can create exclusive groups through the name attribute of <details>.
Giving several <details> elements the same name forms a semantic group:
<details name="faq-accordion">
  <summary>Payment Options</summary>
  <p>...</p>
</details>
<details name="faq-accordion">
  <summary>Personalise your PIN</summary>
  <p>...</p>
</details>
<details name="faq-accordion">
  <summary>How can I add an additional cardholder to my Platinum Mastercard</summary>
  <p>...</p>
</details>
See the Pen “<details> exclusive group” by rob2 (@robatronbobby) on CodePen.
Before using an exclusive accordion, consider whether it actually helps users. If users are likely to want to consume more of the information, an exclusive accordion forces them to reopen items frequently, which can be frustrating.

This feature is supported in all modern browsers, so you can use it right away.
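One detail worth knowing: because only one widget in a name group can be open at a time, you can pre-open a single item with the open attribute. A small sketch of my own:

<!-- “Payment Options” starts expanded; opening another item closes it -->
<details name="faq-accordion" open>
  <summary>Payment Options</summary>
  <p>...</p>
</details>
<details name="faq-accordion">
  <summary>Personalise your PIN</summary>
  <p>...</p>
</details>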
Styling the summary marker

A disclosure widget is typically presented with a small triangular marker beside its label. In this section, we’ll cover how to style this marker.
The marker is associated with the <summary> element. The addition of the ::marker pseudo-element means that we can style the marker box directly. However, we are limited to a small set of CSS properties:
- All font properties
- color
- white-space
- The text-combine-upright, unicode-bidi, and direction properties
- content
As mentioned earlier, <summary> is similar to a <li>; it supports the list-style shorthand property and its longhand properties. While this might sound a bit hodge-podge, the styling options will be easier to understand with some examples.
Before jumping into examples, a quick word on browser support. At the time of writing, Safari is the only major browser that doesn’t fully support styling the marker:

- Safari only supports the color and font-size properties of the ::marker pseudo-element. It also supports the non-standard ::-webkit-details-marker pseudo-element
- Safari doesn’t support the list-style properties at all. See CanIUse for reference

Say we wanted to change the color of the triangular marker to red and make it 50% larger. We can do the following:
\\nsummary::marker {\\n color: red;\\n font-size: 1.5rem;\\n}\\n\\n
This should work across all browsers. Here’s the CodePen example.
By default, the marker sits beside the text content of <summary>, inside the same bounding box, because list-style-position is set to inside. In the open state, the “additional information” sits directly underneath the marker. Perhaps you want to change this spacing and alignment:
If we set list-style-position to outside, the marker sits outside of the <summary> bounding box. This enables us to adjust the space between the summary text and the marker:
summary {
  list-style-position: outside;
  padding-inline-start: 1rem;
}
You can see this in the second instance in the screenshot above. Here is a CodePen of this example.
If you want to change the content of the marker, you can use the content property of the ::marker pseudo-element and set it to whatever text you like. For my example, I used the zipper-mouth emoji for the closed state and the open-mouth emoji for the open state:
summary::marker {
  content: '🤐 ';
  font-size: 1.2rem;
}

details[open] summary::marker {
  content: '😮 ';
}
See the Pen “<details> with custom text marker” by rob2 (@robatronbobby) on CodePen.
To use an image for the marker, you can use the content property of the ::marker pseudo-element, or the list-style-image property of <summary>:
summary::marker {
  content: url("arrow-circle-right.svg");

  /* you can use a data URI too */
  content: url('data:image/svg+xml,<svg height="1em" width="1em" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path fill="white" d="M12 16.725L16.725 12 12 7.275 10.346 8.93l1.89 1.89h-4.96v2.36h4.96l-1.89 1.89zm0 7.087q-2.45 0-4.607-.93-2.156-.93-3.75-2.525-1.595-1.593-2.525-3.75Q.188 14.45.188 12q-.002-2.45.93-4.607t2.525-3.75q1.592-1.594 3.75-2.525Q9.55.188 12 .188q2.45 0 4.607.93 2.158.93 3.75 2.525 1.593 1.593 2.526 3.75.933 2.157.93 4.607-.004 2.45-.93 4.607-.93 2.157-2.526 3.75-1.597 1.594-3.75 2.526-2.154.932-4.607.93"/></svg>');
}

/* OR */

summary {
  list-style-image: url("arrow-circle-right.svg");
}
In the following example, we are using two arrow icons from Material Symbols for the marker. The right-facing arrow is for the closed state, and the down-facing arrow is for the open state:
See the Pen “<details> with SVG marker” by rob2 (@robatronbobby) on CodePen.
These examples will work as expected in Chrome and Firefox, but Safari will ignore the styles. You can approach this as a progressive enhancement and call it a day. But if you want the same appearance across all browsers, you can hide the marker and then add your own image as a stand-in. This gives you more freedom:
\\n/* Removes default marker. Please consider accessibility, read below. */\\nsummary::-webkit-details-marker {\\n display: none;\\n}\\n\\nsummary {\\n list-style: none;\\n}\\n\\n
You can then indicate the state with your own marker icon, such as an inline image or a pseudo-element. Since <summary> already (mostly) conveys the expand/collapse state, an inline graphic should be treated as decorative. An empty alt attribute does this:
<!-- You can add your own image inside <summary> as a decorative element instead of the hidden marker. -->
<details>
  <summary><img src="my-marker.png" alt>Do you want to know more?</summary>
  <div>Yes</div>
</details>
You can also position the marker at the end of <summary> if you wish:
<!-- You can place it at the end of the summary text -->
<details>
  <summary>Do you want to know more?<img src="my-marker.png" alt></summary>
  <div>Yes</div>
</details>
However, it is important to note that hiding the marker causes accessibility issues with screen readers. Firefox, VoiceOver, JAWS, and NVDA all have an issue with consistently announcing the toggled state of the disclosure widget if the marker is removed. Unfortunately, the style is tied to the state. It is preferable to avoid doing this.
Styling the hidden content of <details>

You may want to style the “additional information” section of the disclosure widget without leaking styles to the <summary>. Because a <details> can contain a variable number of elements, it would be nice to have a catch-all rule:
<details>
  <summary>Do you want to know more about styling the hidden section?</summary>
  <h2>Styling hidden section</h2>
  <p>Tell me more.</p>
  <div>This is a div</div>
</details>
My go-to is to exclude the <summary> element using the :not() function. Just keep in mind that this targets each element individually rather than the content as a single section!
details > *:not(summary) {
  color: palegoldenrod;
  font-size: 0.8em;
  margin-block-start: 1rem;
}
Alternatively, you can use the ::details-content pseudo-element, which targets the entire section as one box. This is also why it is the right tool for animating the opening and closing transitions:
/* browser support is limited */
details::details-content {
  color: palegoldenrod;
  font-size: 0.8em;
  margin-block-start: 1rem;
}
Notice the difference? There is only one margin, at the start of the section; the <p> and <div> do not get their own margins.
The downside of the ::details-content pseudo-element is that browser support is currently limited to Chrome.
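If you want to use ::details-content where it is available and fall back to the :not() rule elsewhere, one option is to gate the fallback with @supports. This is a sketch of my own, not something from the article:

/* fallback for browsers without ::details-content */
@supports not selector(details::details-content) {
  details > *:not(summary) {
    margin-block-start: 1rem;
  }
}

/* enhancement: one margin on the hidden section as a whole */
details::details-content {
  margin-block-start: 1rem;
}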
There are a few gotchas to be aware of. First, you cannot change the display type of the <details> element itself (this restriction has been relaxed in Chrome). Second, be careful changing the display type of <summary>: the default is display: list-item, and if you change it to display: block, the marker may be hidden in some browsers. This was an issue in Firefox:

/* This may cause the marker to be hidden in some browsers */
summary {
  display: block;
}
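Relatedly, if you need a flex layout inside <summary> (for example, to place an icon beside the label), one workaround is to apply the layout to a wrapper element instead of to <summary> itself. The .summary-row class is my own hypothetical markup, not something from the article:

/* assumes markup like:
   <summary><span class="summary-row">label <img src="icon.svg" alt></span></summary> */
summary {
  display: list-item; /* keep the default so the marker survives */
}

summary .summary-row {
  display: inline-flex;
  align-items: center;
  gap: 0.5rem;
}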
Finally, be careful with headings inside <details>. Because the <summary> element has a default ARIA role of button, it strips the roles from its child elements. So if you put a heading like an <h2> inside a <summary>, assistive technologies such as screen readers won’t recognize it as a heading. Try to avoid this pattern:

<!-- The h2 is not recognized as a heading by assistive technologies -->
<details>
  <summary><h2>Spoilers</h2></summary>
  <ol>
    <li>Steven Spielberg shot the film in chronological order to invoke a real response from the actors (mainly the children) when E.T. departed at the end. All emotional responses from that last scene are real.</li>
    <li>When E. T. is undergoing medical treatment, an off-camera voice says, "The boy's coming back. We're losing E.T." The person delivering this line is Melissa Mathison, who wrote the screenplay for the film.</li>
  </ol>
</details>
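A common workaround (my suggestion, not from the article) is to keep heading markup out of <summary> and style the label to look like a heading instead:

<details>
  <summary><span class="summary-heading">Spoilers</span></summary>
  <p>...</p>
</details>

You can then give .summary-heading heading-like font styles in CSS, while the role and accessible name of the summary stay intact.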
Recently, there was a big proposal to make <details> more customizable and interoperable between browsers. Phase 1 includes some of what I covered in this article:

- Removing the display property restrictions so that you can use other display types such as flex and grid
- Adding the ::details-content pseudo-element to address the second slot, so that a container for the “additional information” in the <details> element can be styled

The exciting news is that both of these have shipped in Chrome 131 (as of November 2024). The next phase should tackle improving the styling of the marker. Additionally, there is a set of related changes that will help improve the ability to animate these elements.
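For example, with the restriction lifted, a rule like this (a small sketch of my own) now works as you would expect in Chrome 131+:

/* Chrome 131+: <details> accepts other display types */
details {
  display: flex;
  flex-direction: column;
  gap: 1rem;
}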
The <details> HTML element has gotten much easier to customize with CSS. You can now make exclusive groups with full browser support, animate the opening and closing transitions as a progressive enhancement, and perform simple styling of the marker.

The Achilles’ heel of <details> remains the styling of the marker. The good news is that there is an active proposal that addresses this and other pain points, which should remove the remaining stumbling blocks. In the near future, you won’t need to write your own disclosure widget or reach for a third-party web component! 🤞