23 Apr 2018, 19:37

The Rise of JavaScript Scheduling

What is scheduling?

At a high level, scheduling can be thought of as a way of splitting up work and allocating it to be completed by a compute resource. The work can be processed at some point in the future, in an order, and by a given resource that the scheduler specifies.

For example, if we had a set of tasks of equal importance, let's say calculating billing and distributing invoices for users of your Software as a Service platform, we could use a scheduler to distribute that work so that processing time is shared evenly between tasks. If the tasks were of differing importance, we could allow the scheduler to prioritise work, ensuring more processing time is allocated to higher-priority tasks. For instance, you might prioritise the workloads of paying customers over free users.

At a more granular level there are many different approaches, of varying complexity, to scheduling work. Some examples of mainstream scheduling strategies include first in, first out; earliest deadline first; and round robin. The Wikipedia article on scheduling is actually really strong, so I'll defer to that for an in-depth explanation.
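To make one of those strategies concrete, here is a minimal round-robin sketch in plain JavaScript (the task shape and `stepsPerTurn` parameter are my own illustration, not any library's API): each task gets a fixed number of steps per turn, then goes to the back of the queue until it is finished.

```javascript
// A minimal round-robin scheduler sketch. Each task is an object with a
// step() method that performs one unit of work and returns true while
// work remains.
function roundRobin(tasks, stepsPerTurn) {
  const queue = [...tasks];
  while (queue.length > 0) {
    const task = queue.shift();
    let remaining = false;
    for (let i = 0; i < stepsPerTurn; i++) {
      remaining = task.step();
      if (!remaining) break; // task finished mid-turn
    }
    // Unfinished tasks go to the back of the queue for the next cycle.
    if (remaining) queue.push(task);
  }
}
```

Real schedulers run continuously and interleave with other work, but the queue rotation above is the essence of the round-robin strategy.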

Why schedule work in JavaScript?

Let us consider why a scheduler may be of value in JavaScript. In web development we have the case that:

“By default the browser uses a single thread to run all the JavaScript in your page as well as to perform layout, reflows, and garbage collection. This means that long-running JavaScript functions can block the thread, leading to an unresponsive page and a bad user experience”. - MDN

There are ways to offload work onto other threads via Web Workers, but these have limitations, such as not being able to access the DOM, and having to copy data from the main thread and back again (RIP SharedArrayBuffers). In some ways the single-threaded nature of JavaScript is useful; otherwise we would have the complex overhead of managing race conditions between multiple threads trying to access the DOM.

Fundamentally, the JavaScript thread of the browser works by way of the event loop, which cycles round executing queued work. At a high level there are three major types of work that the event loop processes:

  • Tasks - Event handlers, setTimeout and setInterval callbacks, etc
  • Microtasks - MutationObserver callbacks, Promises - these get executed whenever the JavaScript call stack empties
  • Rendering steps - requestAnimationFrame queued work, Style, Layout, Paint
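You can observe this ordering directly. In the snippet below (runnable in any modern browser console or in Node), the synchronous code runs first, the queued microtask runs as soon as the call stack empties, and the setTimeout task runs after that:

```javascript
const order = [];

setTimeout(() => order.push('task'), 0);               // task queue
Promise.resolve().then(() => order.push('microtask')); // microtask queue
order.push('sync');                                    // current call stack

// Microtasks drain as soon as the call stack empties, before the next task:
setTimeout(() => console.log(order), 10); // logs ['sync', 'microtask', 'task']
```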

Knowing this helps us write (and understand) a well-planned scheduler in conjunction with our own code. We can break up long-running tasks into smaller ones and interleave them with other work, for example performing layouts and repaints in an efficient manner.
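As a sketch of that idea (the helper name and callback shape here are illustrative, not from any particular library), a long-running job can be split into chunks, with a yield back to the event loop between chunks so rendering and input handling get a chance to run:

```javascript
// Process a long list a chunk at a time, yielding to the event loop
// between chunks so the browser can render and respond to input.
function processInChunks(items, processItem, chunkSize, done) {
  let i = 0;
  function runChunk() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      processItem(items[i]);
    }
    if (i < items.length) {
      setTimeout(runChunk, 0); // yield, then continue with the next chunk
    } else {
      done();
    }
  }
  runChunk();
}
```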

The specifics of the event loop are much better left to others; specifically I would recommend Jake Archibald’s talk at JSConf Asia 2018 which is a superb elucidation on the subject (he also has a blog post).

User experience and scheduling

A recurring problematic theme in web applications is the idea of jank: low frame rates and poor interactivity for end users. Having talked a bit about the event loop, how might we leverage that understanding to improve our site's user experience? One prime example is Wilson Page's fastdom, which was one of the first schedulers I came across, back in late 2013. The core premise is that it's possible to batch up DOM reads and writes, and then schedule them using requestAnimationFrame for noticeably smoother animations. requestAnimationFrame allows developers to schedule DOM updates right before the browser performs the next render cycle (style, layout, paint, composite). This prevents work being done mid-frame and causing it to miss the frame, as shown in the following diagram (thanks Google). Clever stuff!
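To illustrate the batching premise (this is a simplified sketch of the idea, not fastdom's actual API), a tiny read/write batcher might queue measurements and mutations separately and flush all reads before all writes once per frame. The `schedule` parameter is injectable here so the sketch runs outside a browser; in practice it would be requestAnimationFrame:

```javascript
// A fastdom-style batcher sketch: queue DOM reads and writes, flush them
// together once per frame with all reads before all writes, so layout is
// only computed once per flush (avoiding layout thrashing).
function createBatcher(schedule = typeof requestAnimationFrame !== 'undefined'
    ? requestAnimationFrame
    : (cb) => setTimeout(cb, 16)) {
  const reads = [];
  const writes = [];
  let scheduled = false;
  function flush() {
    scheduled = false;
    reads.splice(0).forEach((fn) => fn());   // all measurements first
    writes.splice(0).forEach((fn) => fn());  // then all mutations
  }
  function request() {
    if (!scheduled) {
      scheduled = true;
      schedule(flush);
    }
  }
  return {
    measure(fn) { reads.push(fn); request(); },
    mutate(fn) { writes.push(fn); request(); },
  };
}
```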

This approach can noticeably improve user experience for DOM-heavy sites. However, fastdom as a library is predominantly about preventing layout thrashing, and generic work scheduling is outside of its scope. Furthermore, requestAnimationFrame is arguably not an appropriate tool in and of itself for handling generic work; each request will run in its intended frame, and requests are not distributed across multiple frames.

Adoption by frameworks

Within the past few years we've seen an increased interest in scheduling work by frameworks. The single-threaded nature of JavaScript, and the plethora of tasks that need to be completed to let users navigate a modern web application, pose an interesting challenge - especially for framework developers. We may wish to animate elements whilst also accepting user input and sending that input off to a server. There is a possibility that user input and its associated handling could cause our frames to take too long to render, resulting in a rough user experience.

Arguably one of the earliest examples of scheduling is the AngularJS (1) digest cycle. AngularJS came out in 2010, and it had its own built-in event cycle (a scheduler of sorts) for handling its notorious two-way data-binding system. An overview diagram can be seen below. The digest cycle checks for changes between the view data model and the DOM, and then re-renders after the cycle to reflect those changes. Certain elements of the cycle, by the documentation's own admission, could be considered problematic; for example setTimeout(0) can be janky, as the browser repaints after every event.

Angular, AngularJS's successor, has a different approach to change detection and updates, which can be read about here. It also does some interesting things with asynchronous execution contexts (zones), allowing for a smarter way of doing operations like updating the DOM, error handling and debugging.

Vue.js takes the approach of batching DOM updates asynchronously. There is no 'digest cycle' as in AngularJS, as Vue.js encourages a data-driven approach. Batched operations are flushed out to the DOM on a given tick cycle. Internally the queue uses Promise.then and MessageChannel, with a fallback to setTimeout(fn, 0) if those aren't available.
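The core of that batching idea can be sketched like so (a heavy simplification of Vue's actual scheduler; the names are illustrative): updates queued within the same tick are de-duplicated and flushed together in a single microtask.

```javascript
// Vue-style async batching sketch: state changes within one tick are
// collected (de-duplicated via a Set) and flushed once, in a microtask.
const queue = new Set();
let pending = false;

function queueUpdate(componentUpdateFn) {
  queue.add(componentUpdateFn); // the same function is only queued once
  if (!pending) {
    pending = true;
    Promise.resolve().then(() => {
      pending = false;
      const jobs = [...queue];
      queue.clear();
      jobs.forEach((job) => job()); // flush all queued updates together
    });
  }
}
```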

Other frameworks have explored a holistic view of how to handle the stream of user interactions, network requests, data flow and rendering. A prime and recent example of this is React. React Fiber, fully implemented in React 16, uses scheduling to improve perceived performance and responsiveness in complex web applications. In the world of the web, not all computational work is of equal importance to a user. For example, typing and receiving immediate feedback may be a more critical interaction than a dashboard receiving external data and updating instantaneously. The React team has done a lot of work on its reconciliation algorithm to prioritise work in a way that is conducive to a pleasant user experience. Fundamentally, this is done by scheduling different updates at different priorities. Traditionally, all updates were treated synchronously, with no prioritisation. React's reconciliation phase was uninterruptible, which could lead to low frame rates for complex workloads. Here is an example of the Chrome profiler showing it taking ~750ms to render a frame (the green bar):

The way in which React Fiber attempts to keep frame rates consistent is via what the team has dubbed time slicing. The process uses requestIdleCallback to defer low-priority work to the browser's idle periods. Fiber also estimates the number of milliseconds remaining in the current idle stage, and when this elapses it stops work to give the browser time to render a frame. For a deeper explanation, check out Giamir Buoncristiani's post, the React Fiber Architecture README by Andrew Clark, and Dan Abramov's talk at this year's JSConf Iceland. In Dan's demonstration you can clearly see the difference between the synchronous and asynchronous work patterns:
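A heavily simplified sketch of the time-slicing idea looks something like this (this is not React's actual implementation; the deadline and yield functions are injected purely for illustration): do small units of work while time remains in the frame, then yield and resume later.

```javascript
// Time-slicing sketch: perform units of work only while the deadline
// allows, then yield so the browser can render, and resume afterwards.
// `getTimeRemaining` mimics requestIdleCallback's deadline.timeRemaining().
function workLoop(tasks, getTimeRemaining, yieldToBrowser, onComplete) {
  function run() {
    while (tasks.length > 0 && getTimeRemaining() > 1) {
      tasks.shift()(); // one small unit of work
    }
    if (tasks.length > 0) {
      yieldToBrowser(run); // out of time: let a frame render, then resume
    } else {
      onComplete();
    }
  }
  run();
}
```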

Another project that has (very recently) been leveraging scheduling is StencilJS. For those of you who aren't familiar, StencilJS is a modern web framework that takes popular features from many recent frameworks and combines them with compilation down to native Web Components. Manu Mtz-Almeida of the StencilJS team has recently been pushing commits for scheduling into the core framework, demoing some of his recent work on Twitter:

You can see that the end goal here is similar to what React is doing with Fiber: try to keep the browser rendering at 60fps for a smooth user experience, whilst still completing non-rendering work in reasonable time frames. Hopefully we'll be seeing more from the Stencil team in the future!

The building blocks of scheduling in the browser

There are many ways we could build a scheduler in JavaScript. Traditionally we might have implemented it using JavaScript and browser APIs such as:

  • setTimeout - Execute a given function at some point in the future
  • setInterval - Execute a given function on some recurring schedule (every x milliseconds)
  • Promises - An object representing the eventual success or failure of an asynchronous operation

With ever improving browser standards we also have some interesting additional browser APIs that might help us to write smarter schedulers:

  • requestIdleCallback - Schedule work to be done during the browser's idle periods. Supports deadlines.
  • performance.now - Granular timing API for modern browsers
  • async/await - Allows developers to write easier and cleaner asynchronous code, making it feel more synchronous

As well as these, there is the previously mentioned requestAnimationFrame for visual changes. Interestingly, React was originally relying on requestIdleCallback for Fiber, but they've now written their own polyfill for it (at time of writing). Indeed, there is no single way to write a scheduler, and you could use a myriad of these features to create one. Getting this right appears to be a relatively tricky endeavour; in the words of Bertalan Miklos, "timing is a delicate thing, and slight mistakes can cause some very strange bugs".

Personally, the strongest example I've seen so far of a general-purpose scheduler is Justin Fagnani's queue-scheduler, a framework-agnostic JavaScript scheduler. Here he uses async/await, performance.now(), requestIdleCallback and requestAnimationFrame to allow developers to schedule work. It's worth examining the source code to see how these are used (FYI: it's written in TypeScript).

Final thoughts

Over time we have seen a progressively more sophisticated approach to scheduling. Modern schedulers such as those in React and StencilJS have been written in a way that keeps end users at their heart, keeping frame rates and interactivity high. It is fair to say that with React (arguably the most popular JavaScript framework in modern applications) having taken scheduling to the core of its architecture, we have seen scheduling become mainstream for web developers. We also see API-compatible libraries such as Preact looking to follow suit.

With teams like StencilJS following suit with their user-centric scheduler, there is strong evidence to suggest that smarter scheduling may become commonplace across many approaches to building web applications. I haven't seen much work done with Web Workers and scheduling, but I feel this could be a strong contender for future work, as inline Web Worker libraries have become more popular and Web Workers do not block the main thread. I think this is especially true for long-running tasks; see for example a little demo I did of a library called Fibrelite for offloading processing to an inline Web Worker.

03 Apr 2018, 19:37

Cancelling Requests with Abortable Fetch

There are often times in a web application where you need to send a request for the latest user input or interaction. Some examples might be an autocomplete, or zooming in and out of a map. Let's think about each of these examples for a moment. Firstly autocomplete: every time we type (or maybe less often, if we were to debounce) we might send out a request. As we keep typing, the old requests may become irrelevant (i.e. 'java' versus 'javascript'). That's potentially a lot of redundant requests before we get to what we're interested in!
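As an aside, the debouncing just mentioned can be sketched in a few lines: calls made in quick succession are collapsed, and only the last one fires once the input has gone quiet for `ms` milliseconds.

```javascript
// A minimal debounce sketch: each call resets the timer, so the wrapped
// function only runs after `ms` milliseconds with no further calls.
function debounce(fn, ms) {
  let id;
  return (...args) => {
    clearTimeout(id);
    id = setTimeout(() => fn(...args), ms);
  };
}
```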

Now the web map case; we’re zooming and panning around the map. As we zoom in and out, we are no longer interested in the tiles from the previous zoom levels. Again, lots of requests might be pending for redundant data.

Taking the first example, let’s set the scene by looking at some naive code about how we might implement an autocomplete. For the purpose of this article we will be using the more modern fetch rather than XMLHttpRequest for making a network request. Here’s the code:

    autocompleteInput.addEventListener('keydown', function() {

        const url = "https://api.example.com/autocomplete";

        fetch(url)
            .then((response) => {
                // Do something with the response
            })
            .catch((error) => {
                // Something went wrong
            });
    });

The problem in this case is that each one of these requests will complete, even if it is no longer relevant. We could implement some extra logic in the updateAutocompleteMenu handler to prevent unnecessary code execution, but this won't actually stop the request. It's also worth noting here that browsers have a limit on concurrent outgoing requests, and will queue requests once that limit is hit (although the limit varies by browser).

Abortable Fetch

A new browser technology that we can leverage to solve the aforementioned issue is Abortable Fetch. Abortable fetch relies on the browser specification for AbortController. The controller has a property called signal, which we can pass to our fetch as an option (also named signal), and then later use to cancel the request via the controller's abort method.

An example might look a little like this:

    const url = "https://api.example.com/autocomplete";
    let controller;
    let signal;

    autocompleteInput.addEventListener('keyup', () => {

        if (controller !== undefined) {
            // Cancel the previous request
            controller.abort();
        }

        // Feature detect
        if ("AbortController" in window) {
            controller = new AbortController();
            signal = controller.signal;
        }

        // Pass the signal to the fetch request
        fetch(url, { signal })
            .then((response) => {
                // Do something with the response
            })
            .catch((error) => {
                // Something went wrong (an aborted request rejects with an AbortError)
            });
    });
Here we do feature detection to determine if we can use AbortController (it's supported in Edge, Firefox and Opera, and coming in Chrome 66!). We also check whether a controller has already been created, and if so we call controller.abort(), which cancels the previous request. You can also pass the same signal to multiple fetches to cancel them all at once.

A little demo

I've created a small demo showing how to use Abortable Fetch, loosely based on the autocomplete idea (without any of the implementation details!). Every time you type, it makes a network request; if you make a new keystroke before the old request has completed, it aborts the previous fetch. It looks a little something like this in practice:

You can check the code out here.

Thinking beyond fetch

Perhaps the coolest part about AbortController is that it has been designed as a generic mechanism for aborting asynchronous tasks. It is part of the WHATWG specification, meaning it is a DOM specification rather than a language (ECMAScript) specification, but for frontend development it is still a useful feature. You could leverage it as a cleaner async control-flow mechanism whenever you implement asynchronous tasks (i.e. when using Promises). Feel free to take a look at Bram Van Damme's super article for a more detailed example of what I'm talking about.
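For instance (the `wait` helper below is hypothetical, purely to illustrate the pattern), any Promise-returning function can accept a signal and reject when it is aborted:

```javascript
// A sketch of AbortController as a generic cancellation primitive.
// `wait` is a hypothetical helper, not a built-in.
function wait(ms, { signal } = {}) {
  return new Promise((resolve, reject) => {
    if (signal && signal.aborted) {
      return reject(new Error('Aborted'));
    }
    const id = setTimeout(resolve, ms);
    if (signal) {
      signal.addEventListener('abort', () => {
        clearTimeout(id); // clean up the pending work
        reject(new Error('Aborted'));
      }, { once: true });
    }
  });
}

// Usage: abort the pending wait, just as you would abort a fetch.
const controller = new AbortController();
wait(1000, { signal: controller.signal })
  .then(() => console.log('waited'))
  .catch((err) => console.log(err.message)); // logs 'Aborted'
controller.abort();
```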

18 Feb 2018, 17:56

Easier Web Workers

Ever been on a web page where everything feels a bit slow? Delays typing, scrolling, and generally interacting with the page? One of the main causes of this is 'blocking the main thread'. Browsers do their best to keep the rendered contents of a page in sync with the refresh rate of a monitor (generally about 60 frames per second). However, doing expensive operations in your main thread (i.e. where your everyday JavaScript is executed) can block it, preventing efficient page rendering and in turn delaying responses to user interactions such as scrolling and typing.

Thankfully, with the power of Web Workers we can offload heavy computations to another thread, leaving the main thread to handle rendering and user interactions. A Web Worker runs a JavaScript file as a background thread, in a separate context from the browser's main thread. So how do we construct a worker? Like so:

    const worker = new Worker('worker.js');

Here worker.js contains the code that will listen for messages from the main thread and perform the specified work.

Workers are pretty flexible, but one core thing you can't do is access and manipulate the DOM. They also require you to pass data to them, and that data is copied rather than shared, unless you're using Transferables. You can natively pass to a worker any data supported by the Structured Clone Algorithm; in practice this means most things minus Functions, Errors and DOM Elements. Serialising with JSON.stringify may bring some performance benefits here, although that's worth testing for your use case first. It is worth mentioning that JSON.stringify also has various types it does not round-trip cleanly, such as functions and undefined (which are dropped) and Date objects and regular expressions (which lose their type).
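As a quick illustration of that last point, JSON.stringify silently drops functions and undefined, and converts Dates into plain strings, which is worth knowing before using it to serialise worker payloads:

```javascript
const payload = {
  n: 1,
  when: new Date(0),   // serialised as an ISO string; the Date type is lost
  fn: () => {},        // dropped entirely
  missing: undefined,  // dropped entirely
};

console.log(JSON.stringify(payload));
// → {"n":1,"when":"1970-01-01T00:00:00.000Z"}
```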

Since data is not shared, there is a performance overhead in copying data to the worker. The exception is the previously mentioned Transferables, which are 'zero-copy': ownership of the data is transferred to the worker's context instead. This can be an order of magnitude faster than copying.

There is a cost to instantiating a Web Worker, which will vary by browser and device, but this Mozilla article suggests you're looking at around the 40ms mark. Communicating with a Web Worker (postMessage) is fast, however: around 0.5ms of latency.

Passing Messages

So what does the code for passing data (a message) to and from a Web Worker look like?

    // In our main JavaScript file

    // Post data
    worker.postMessage("Hello from the main thread!");

    // Receive data
    worker.addEventListener('message', (event) => {
        console.log("Data from worker received: ", event.data);
    }, false);

And then in the Web Worker (say webworker.js) we need a way to receive the message:

    self.addEventListener('message', (event) => {
        console.log("Worker data received from the main thread", event.data);
        // Do what we want with event.data, then reply
        self.postMessage(
            `Hello from the Web Worker thread!
             The message received had length: ${event.data.length}`
        );
    }, false);

Here we can see that once the message is received we can manipulate the incoming data as we see fit and send it back with postMessage.

A simple Web Worker example

To give a more tangible example, I have created an example repository which shows how we can produce large numbers of primes in a Worker whilst maintaining interactivity with the page.

Are there any nice abstraction libraries?

Yes! I have compiled a list of Hello World examples using various popular libraries. Namely:

  • Greenlet - Turn async functions into Web Workers
  • Comlink - Modern abstraction of Web Workers
  • Operative - Simpler callback oriented workers

You can see all of those examples in my GitHub repo here. There are others that might be worth checking out depending on your use case that I haven’t added.

  • promise-worker by Nolan Lawson for simpler promise based workers.
  • Workerize by Jason Miller which is the module level version of greenlet
  • Clooney by Surma; an actor library which builds upon Comlink.

Let's take a little look at how Greenlet might work. Using ES2017 async/await syntax, we get readable code without sacrificing functionality. Under the hood, greenlet does something pretty cool: it generates an inline Web Worker using URL.createObjectURL and Blob. This allows us to do something like so:

    const asyncSieveOfEratosthenes = greenlet(async (limit) => {
        // Code redacted for brevity
    });

    const calculate = document.getElementById("calculatePrimes");
    const message = document.getElementById("showPrimes");

    calculate.addEventListener("click", async () => {
        const n = 100000000;
        message.innerHTML = "Main thread not blocked!";
        // The following async function won't block:
        const totalPrimes = await asyncSieveOfEratosthenes(n);
        calculate.innerText = "Done!";
        message.innerHTML = `${totalPrimes.length} prime numbers calculated!`;
    });
Pretty cool if you ask me!

What about support?

Web Workers are very well supported by all major browsers, so this shouldn’t be an issue:

When to use Web Workers?

Some people may be tempted to start moving all their app logic over to a Web Worker. There is no guarantee that this will be any more performant. Web Workers make the most sense when you have heavy processing that would otherwise block the main thread, hampering rendering and user interaction. For example, imagine you want to do some intensive number crunching, geometry processing (see for example Turf.js) or deep tree traversal and manipulation. The most useful piece of advice I can give here is: profile and benchmark it. If you're new to profiling, check out this piece on CPU profiling in Chrome.


I am currently working on a library called Fibrelite, which is based off of Jason Miller's fantastic greenlet library. The aim is to produce a general-purpose library for spinning out async functions as Web Workers, but with a variety of approaches to handling those function calls - for example pooling, prioritising or debouncing calls where necessary. This would be beneficial for any situation where user interactions and intensive calculations happen in tandem. I will write a more detailed blog post at a later date; in the meantime, check out a demo here.