React Concurrent Rendering: The Underlying Principle
Understand how Fiber, scheduling/time slicing, lanes, and double buffering work together to make concurrent rendering responsive and consistent.
At a surface level, React concurrent rendering can sound like “rendering can be interrupted” or “the UI feels more responsive.” But under the hood, it is not a single feature. It is the result of several mechanisms working together.
You can think of it as being built on four core pillars:
Fiber architecture: breaks rendering into units of work that can be paused and resumed
Scheduler and Time Slicing: decides when React should keep working and when it should yield control back to the browser
Lane Model: manages different updates with more fine-grained priorities
Double Buffering: makes sure users never see a half-finished UI during concurrent rendering
These are not isolated features. Together, they form the foundation of React’s concurrent rendering model.
1. Fiber: the foundation of concurrent rendering
To understand concurrent rendering, you first need to understand why React introduced Fiber in the first place.
In React 15 and earlier, React used the Stack Reconciler. That rendering model was essentially based on the JavaScript call stack: once rendering started, it had to keep going until the entire tree was finished. It could not pause in the middle, and it could not switch to a more important task.
That created a clear problem:
If the component tree was large, the main thread could be blocked for too long. The browser would not get a chance to handle input, animations, or painting in time, which led to jank and dropped frames.
To solve that, React redesigned its internal rendering structure. That redesign is what we call Fiber.
What Fiber actually is
A Fiber is a special node structure. Each Fiber node usually represents one unit of work in the component tree, and it stores the information React needs for rendering that node, such as:
component type
props and state
corresponding DOM information
effect flags
relationships to parent, child, and sibling nodes
Instead of relying on a recursive call stack that must run to completion, React uses a structure it can control and resume on its own.
From a traversal perspective, walking a Fiber tree is not a normal recursive tree walk. With pointers like child, sibling, and return, React turns the component tree into a linked structure that can be traversed, paused, and resumed without relying on the call stack.
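To make that concrete, here is a minimal sketch of such a traversal. The field names (child, sibling, return) follow React's, but everything else is a simplified model for illustration, not React's actual implementation:

```javascript
// Minimal sketch of Fiber-like nodes and an iterative, resumable walk.
function createFiber(name) {
  return { name, child: null, sibling: null, return: null };
}

// Tiny tree:  App -> (Header, Main -> List)
const app = createFiber("App");
const header = createFiber("Header");
const main = createFiber("Main");
const list = createFiber("List");
app.child = header;
header.return = app;
header.sibling = main;
main.return = app;
main.child = list;
list.return = main;

const visited = [];

// Process one unit of work and return the next one, or null when done.
// Because the position lives in the pointers rather than the JS call
// stack, the caller can stop after any unit and resume later.
function performUnitOfWork(fiber) {
  visited.push(fiber.name); // "begin work" on this node would happen here
  if (fiber.child) return fiber.child;     // go down first
  let node = fiber;
  while (node) {
    if (node.sibling) return node.sibling; // then across
    node = node.return;                    // then back up ("complete work")
  }
  return null; // entire tree finished
}

let nextUnitOfWork = app;
while (nextUnitOfWork) {
  nextUnitOfWork = performUnitOfWork(nextUnitOfWork);
}
// visited: ["App", "Header", "Main", "List"]
```

The key design point is that the loop can exit after any iteration and later pick up again from whatever `nextUnitOfWork` was, which is exactly the ability the Stack Reconciler lacked.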
What Fiber makes possible
The biggest value of Fiber is not just that React changed its internal data structure. The real value is that Fiber gives React these abilities:
break a large render into smaller units of work
pause after finishing one unit
keep track of where it stopped
continue later from the exact same spot
So in one sentence:
Fiber turns rendering from one giant blocking task into many smaller units of work that can be interrupted.
That is the foundation that makes concurrent rendering possible.
2. Scheduler and Time Slicing: deciding when to pause and when to continue
Fiber gives React the ability to pause. But that still leaves an important question:
When should React keep rendering, and when should it stop and give control back to the browser?
That is the job of the Scheduler.
Why React needs its own scheduler
JavaScript in the browser runs on a single thread. React cannot truly preempt a running JavaScript task the way an operating system can preempt a process. It cannot forcibly cut a task off in the middle. What React can do instead is use cooperative scheduling.
That means React has to check at the right times:
have we already used enough time in this slice?
does the browser need the main thread for something more urgent?
should we pause now and continue later?
This is what people mean by cooperative multitasking in React.
What time slicing means
One of the core ideas behind concurrent rendering is Time Slicing.
Instead of letting React render the whole tree in one uninterrupted run, React only works for a short slice of time, then checks whether it should continue.
A simplified flow looks like this:
React starts processing part of the Fiber work
after finishing one or a few units of work, it checks whether it should continue
if there is still time, it keeps going
if time is running out, or the browser needs the thread, React pauses the current render
React gives control back to the browser
after the browser handles painting, input, or other urgent work, React comes back and continues the unfinished render
That is the basic idea of time slicing:
React no longer monopolizes the main thread for long stretches. It breaks rendering into smaller chunks and fits them into the browser’s rhythm.
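A toy version of that loop might look like the following. The names `frameDeadlineMs`, `workQueue`, and `workLoop` are invented for this sketch; React's real Scheduler is considerably more involved:

```javascript
// Toy time-sliced work loop (illustrative names, not React internals).
const frameDeadlineMs = 5; // React's Scheduler uses a similarly small slice

const workQueue = Array.from({ length: 1000 }, (_, i) => i);
const processed = [];

function shouldYield(sliceStart) {
  return Date.now() - sliceStart >= frameDeadlineMs;
}

// Processes units until the slice is used up, then reports whether
// more work remains so a continuation can be scheduled.
function workLoop() {
  const sliceStart = Date.now();
  while (workQueue.length > 0 && !shouldYield(sliceStart)) {
    processed.push(workQueue.shift()); // one unit of work
  }
  return workQueue.length > 0; // true => schedule a continuation
}

// In a browser the continuation would be scheduled through the event
// loop (so painting and input can run in between); here we simply loop.
while (workLoop()) { /* yield point: the browser could paint here */ }
```

The important part is the shape: work, check, maybe yield, continue later. Nothing is forcibly preempted; the loop volunteers to stop.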
How React actually schedules work
React does not rely on requestIdleCallback, even though it sounds like a natural fit. Its timing and frequency are not stable enough across browsers, and React needs more predictable control over scheduling.
So React implements its own Scheduler. Depending on the host environment, it uses available mechanisms to continue work later. In the browser, this often involves tools like MessageChannel to schedule follow-up work in future turns of the event loop.
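Here is a sketch of that yielding mechanism. The name `requestHostCallback` mirrors one used inside React's Scheduler, but this version is heavily simplified; the real Scheduler also tracks deadlines, priorities, and falls back to setTimeout in hosts without MessageChannel:

```javascript
// Sketch of scheduling a continuation via MessageChannel (simplified).
// postMessage enqueues a task on the event loop, so painting and input
// events that are already queued get handled before the callback runs.
const channel = new MessageChannel();
let scheduledCallback = null;

channel.port1.onmessage = () => {
  const callback = scheduledCallback;
  scheduledCallback = null;
  if (callback) callback();
};

function requestHostCallback(callback) {
  scheduledCallback = callback;
  channel.port2.postMessage(null); // run `callback` in a future task
}
```

Calling `requestHostCallback(workLoop)` after a yield is how the paused render "comes back" in a later turn of the event loop.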
The key idea is not that React invented multithreading. It did not. The better way to think about it is this:
React controls how work is split up and when it resumes, so that in a single-threaded environment it can simulate work that feels pausable and resumable.
3. Lane Model: the priority system behind concurrent rendering
Being able to pause and resume is not enough on its own. The harder part of concurrent rendering is this:
If multiple updates happen around the same time, which one should React handle first?
For example:
the user clicks a button and triggers a state update
an input field is receiving continuous typing
somewhere else, React is rendering a large list at low priority
a startTransition update is also waiting to be processed
Clearly, not all of these updates are equally important.
To solve this, React introduced the Lane Model, which replaced its older expiration-time priority system and underpins the concurrent features that shipped in React 18.
What a lane is
A lane is basically a priority channel.
Instead of using one single priority number for everything, React assigns updates to different lanes. Internally, these lanes are typically represented with bitmasks.
You can roughly picture it like this:
export const SyncLane = 0b0000001;
export const InputContinuousLane = 0b0000100;
export const DefaultLane = 0b0010000;
export const TransitionLane = 0b1000000;
The exact values are not the important part. What matters is this:
React uses bits to represent which priority lanes are involved in the current work.
Why React uses bit operations
The Lane model has a few important advantages.
First, it can represent multiple priorities efficiently
An update does not always belong to only one isolated priority. Multiple updates can coexist, and React needs an efficient way to represent which lanes are currently pending.
Bitmasks are a very efficient way to express that kind of set.
Second, merging and filtering are cheap
Operations like “merge these lanes,” “find the highest-priority pending lane,” or “check whether an update belongs to this batch” can all be done efficiently with bitwise operators.
Third, it works well for concurrent rendering
In a concurrent system, React often needs to reevaluate which task is the most important to work on next. The Lane model makes that process more flexible and more efficient.
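These bitwise operations can be sketched directly. The lane values are shortened to a few bits for readability (React's real masks are 31-bit), and the helper names are simplified versions of the ideas described above:

```javascript
// Simplified lane constants (React's real lanes are 31-bit bitmasks).
const SyncLane = 0b0000001;
const InputContinuousLane = 0b0000100;
const DefaultLane = 0b0010000;
const TransitionLane = 0b1000000;

// Merging lanes is a single OR.
function mergeLanes(a, b) {
  return a | b;
}

// The lowest set bit is the highest-priority pending lane, because
// higher-priority lanes are assigned lower bits.
function getHighestPriorityLane(lanes) {
  return lanes & -lanes;
}

// Membership check: does this update's lane overlap the batch?
function includesLane(set, lane) {
  return (set & lane) !== 0;
}

const pending = mergeLanes(TransitionLane, DefaultLane); // 0b1010000
// getHighestPriorityLane(pending) picks DefaultLane, the lower bit.
```

Every operation here is a single CPU instruction on two integers, which is why React can afford to re-evaluate priorities constantly during a render.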
Why higher-priority work can interrupt lower-priority work
This is one of the most important benefits of the Lane model.
Suppose React is currently working on a low-priority update, such as rendering a large list inside a Transition. Then the user clicks a button, which creates a higher-priority synchronous update.
React can tell that:
the current work is in a lower-priority lane
the new update belongs to a higher-priority lane
So React switches to the more important work first.
The key point is not that React freezes the old work halfway and stores it like a thread context. The real point is:
React can pause low-priority rendering, switch to higher-priority updates, and if necessary restart the low-priority render later based on the newest state.
From the outside, it looks like higher-priority work “cuts in line.”
That is why concurrent rendering helps keep user input and interactions responsive.
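As a toy model of that "cutting in line," the decision boils down to comparing the lane currently being rendered against the highest-priority pending lane. All names here are invented for illustration; React's actual work loop performs this check through its scheduler:

```javascript
// Toy interruption model (invented names, not React internals).
const SyncLane = 0b0001;       // e.g. a click handler's setState
const TransitionLane = 0b1000; // e.g. work started via startTransition

let pendingLanes = 0;
let renderingLane = 0;
const log = [];

function scheduleUpdate(lane) {
  pendingLanes |= lane;
  ensureWorkIsScheduled();
}

function ensureWorkIsScheduled() {
  // Highest priority = lowest set bit.
  const nextLane = pendingLanes & -pendingLanes;
  if (renderingLane === 0) {
    renderingLane = nextLane;
    log.push(`start lane ${nextLane}`);
  } else if (nextLane < renderingLane) {
    // A more urgent lane arrived: abandon the in-progress render
    // and switch; the transition can restart from fresher state later.
    log.push(`interrupt lane ${renderingLane} for lane ${nextLane}`);
    renderingLane = nextLane;
  }
}

scheduleUpdate(TransitionLane); // low-priority render starts
scheduleUpdate(SyncLane);       // user clicks mid-render
// log: ["start lane 8", "interrupt lane 8 for lane 1"]
```

The low-priority render is simply abandoned and redone later, which is cheap because nothing has been committed to the screen yet.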
4. Double Buffering: making sure the UI never shows a half-finished state
Concurrent rendering allows rendering work to be paused, resumed, and even thrown away. That raises a very important question:
If React stops in the middle of rendering, could the user see a partially updated UI?
React’s answer is no. That is where Double Buffering comes in.
React keeps two Fiber trees
In memory, React usually maintains two closely related Fiber trees:
Current Tree: the tree that represents what is currently shown on the screen
WorkInProgress Tree: the new tree React is building in the background
You can think of them like this:
one is the live version
one is the draft version
Where concurrent rendering actually happens
All the new updates, diffing, Hook calculations, and most render-phase work happen on the WorkInProgress Tree.
That means:
React can build the new tree in the background
it can pause
it can resume
it can even throw away the draft and start over if a higher-priority update arrives
But no matter what is happening in the background, the user still sees the old Current Tree.
So the user never sees a half-updated list or a button that has only partially re-rendered.
Why the Commit phase is atomic
Once the WorkInProgress Tree is fully ready, React enters the Commit phase.
The Commit phase has a few important characteristics:
it is still synchronous
it is not time-sliced or interrupted the way render-phase work is
it is responsible for applying the already-calculated result to the real UI all at once
At a high level, you can think of it like this:
React prepares the next version of the UI in the background, and only when the whole version is ready does it switch the screen to that finished result.
That is why double buffering preserves UI consistency.
A useful analogy is graphics rendering:
the next frame is drawn in a back buffer
only after the frame is complete does it get swapped to the front buffer
the user only ever sees complete frames, not something half-drawn
React’s Current Tree and WorkInProgress Tree work in a very similar way.
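The swap itself can be sketched like this. It is deliberately simplified: in real Fiber trees, every node is linked pairwise to its counterpart through an `alternate` pointer, not just the roots:

```javascript
// Sketch of double buffering: build in a draft tree, swap on commit.
// Simplified — React links every Fiber to its counterpart via `alternate`.
const root = {
  current: { state: "old UI", alternate: null }, // what the user sees
};

function render(rootNode, newState) {
  // All render-phase work happens on the workInProgress tree.
  // It can be paused, resumed, or thrown away; `current` is untouched.
  const workInProgress = { state: newState, alternate: rootNode.current };
  return workInProgress;
}

function commit(rootNode, finishedWork) {
  // Commit is synchronous and atomic: a single pointer swap makes the
  // finished tree the one the user sees.
  rootNode.current = finishedWork;
}

const wip = render(root, "new UI");
// ...work may pause and resume here; the screen still shows "old UI"...
commit(root, wip);
// root.current.state is now "new UI"
```

Because the visible tree only ever changes by that one atomic swap, there is no window in which a half-built tree could be observed.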
5. How these four mechanisms work together
If you connect everything into one flow, concurrent rendering roughly works like this:
- An update arrives
A state update is triggered. React first determines which lane the update belongs to, meaning what its priority is.
- React creates or reuses the WorkInProgress Tree
React creates or reuses a WorkInProgress Tree based on the current tree, and prepares to do the next render on that tree.
- Fiber starts processing units of work one by one
React moves through the Fiber structure step by step, running component functions, calculating new state, diffing child trees, and collecting side effects.
- The Scheduler decides whether to pause
During this work, the Scheduler keeps checking:
is the current time slice almost used up?
did a more urgent task come in?
should React yield the main thread back to the browser now?
If needed, React pauses the current render.
- Higher-priority work can interrupt lower-priority work
If a higher-priority update arrives, such as user input or a click, React switches to that lane first. Lower-priority work can continue later, or be restarted based on fresher state.
- Once the WorkInProgress Tree is finished, React commits it
Only after the whole WorkInProgress Tree finishes rendering does React enter Commit and synchronously apply the result.
- The user only sees complete results
Because the screen keeps showing the Current Tree until Commit happens, the user never sees an intermediate state.
6. A more intuitive way to think about it
You can picture React concurrent rendering as a system like this:
Fiber
Like breaking a large task into many smaller steps, where you can pause after each step instead of having to finish everything in one go.
Scheduler
Like a coordinator that keeps asking whether you should keep working or handle something more urgent first.
Lane Model
Like a priority system. Not every task matters equally. A call from the CEO obviously matters more than a routine notification.
Double Buffering
Like writing everything on a draft page first, and only copying it to the final document when it is fully finished. That way no one ever sees the half-written version.
7. Summary
React concurrent rendering is not simply “asynchronous rendering” or “multithreaded rendering.” It still runs in a single-threaded JavaScript environment. What React does is use a carefully designed set of mechanisms to make rendering pausable, resumable, reprioritizable, and still consistent from the user’s point of view.
Under the hood, it mainly depends on these four pieces:
Fiber: breaks rendering into interruptible units of work
Scheduler and Time Slicing: lets React pause and resume work at the right moments
Lane Model: gives different updates fine-grained priority control
Double Buffering: ensures users only ever see complete and stable UI
You can compress the whole idea into one sentence:
The essence of React concurrent rendering is that, in a single-threaded environment, React uses Fiber, scheduling, priority lanes, and double buffering to turn what used to be a non-interruptible synchronous render into a rendering process that can be paused, resumed, reprioritized, and still remain visually consistent for the user.
Reviewed by
DevDepth Editor
Editor and frontend engineering writer
DevDepth publishes practical guides on React, Next.js, TypeScript, frontend architecture, browser APIs, and performance optimization.
Last editorial review: 2026-03-16