04 Jul 2018, 20:37

Service Worker State Management

With this post I want to build upon a previous technique I wrote about: message passing with Service Workers. Here I will be looking at how you might integrate this concept to manage application state across tabs for a site. For those of you unfamiliar, a Service Worker is a separate thread that sits in the background at the browser level (rather than the page level like a Web Worker), allowing pages and workers on the same domain scope (i.e. example.com) to interact with it. This makes Service Workers a great host for things such as intercepting and caching requests, handling offline requests, triaging push notifications and managing background syncing. This is even more true now that they are supported in all modern browsers!

Off the Beaten Track with Service Workers

As well as these more run-of-the-mill Service Worker patterns, there are also some more experimental ideas, like on-the-fly WebP support, or caching from a ZIP file. It’s definitely cool that Service Workers enable these interesting applications. Previously I wrote about passing messages between tabs using a Service Worker, inspired by some tweets and in turn a blog post by Craig Russell. I also recently realised that Arnelle Balane has written about some similar ideas, albeit with a slightly different approach, which are worth a read.

In this post I want to take this further by exploring the idea of state management in a Service Worker. Although many of you might be well versed in writing complex applications, I wanted to run through state management at a high level before we kick off. For the TL;DR skip to the Mixing State Management with Service Workers section.

State Management

We can think of state management as the remit of how we organise and update the state of our application. For example, if our web application were a simple counter, state management would entail concepts like:

  • What is its default value?
  • Where do we store its value?
  • How do we update the counter?
  • In which ways can the counter increment/decrement?

You can see how, even for an arguably straightforward application such as a counter, the cognitive load we have to endure as developers can quickly stack up. This is where state management libraries come in.

State management libraries are tools that allow you to centralise your state management, allowing updates to your data to become more predictable as they are funnelled through very specific and narrow channels. This in theory reduces bugs, decreases cognitive load, and in turn makes it quicker and easier to scale a web application. With that being said, although state management libraries can simplify scaling complex applications, they may actually make smaller applications more complicated than necessary (see this great post from Dan Abramov for a deeper insight on that). Many state management libraries are based on (or loosely based on the concepts of) Facebook’s Flux pattern. Of these, Redux is perhaps the most popular. Another popular state management library is MobX, which takes a more reactive/observer-based approach to state management.
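To make that concrete, here is a minimal Redux-style version of the counter example above. It assumes the Redux UMD build has been loaded on the page; the action names and the logging are purely illustrative. The key idea is that the reducer is the single, narrow channel through which every update flows:


    // A minimal Redux-style counter (illustrative only)
    const initialState = { count: 0 };

    // The reducer is the only place state transitions are described
    function counterReducer(state = initialState, action) {
        switch (action.type) {
            case 'INCREMENT':
                return { count: state.count + 1 };
            case 'DECREMENT':
                return { count: state.count - 1 };
            default:
                return state;
        }
    }

    const store = Redux.createStore(counterReducer);

    // Every change is observable from one place
    store.subscribe(() => console.log(store.getState()));
    store.dispatch({ type: 'INCREMENT' }); // { count: 1 }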

Mixing State Management with Service Workers

Now for the interesting part: putting state management in a Service Worker. Because Service Workers exist outside of the page and/or worker context for a given domain scope, we can use them to pass data between pages. So what if we took this a step further and stored the app’s state in the Service Worker? Now, I know what you’re potentially thinking, which is ‘is this a good idea?’, and in honesty I’m not even sure, but it’s definitely fun and foreseeably useful in specific cases.

Initially I tried to use Redux as the demonstration state manager, however I hit a hurdle. My proof-of-concept appeared to work great in Chrome, but in Firefox it would fail when changing between tabs. What was going on? As it currently stands (June 2018) it looks like Firefox kills off idle Service Workers after 30 seconds, although from my experimenting it actually seems to be less than that. This means that when the tab is idle for a certain period, the script is re-executed when a new message is sent to the worker. State is wiped during this process, making it a non-viable approach. There is some potential that Chrome might do the same in the future.

So, what to do? One suggestion in the above issue is to send some sort of message on a timer to keep the Service Worker alive. I’m not a massive fan of this approach though, as it feels a bit flaky, and in general I think timers should be avoided where possible. So what else can we do? Jeff Posnick recommends using IndexedDB for persisting state, which got me looking into IndexedDB-backed Redux libraries. I came across a Redux library called redux-persist. However this didn’t work out, as it didn’t persist the data in a way that was conducive to syncing state the way I wanted. So instead, I rolled my own state library based on idb by Jake Archibald.

The Web Page

Let’s start with the web page. We’ll assume we are building our counter application and that we have a standard HTML page. Next we’re going to want to register our Service Worker (let’s assume it’s wrapped in a ('serviceWorker' in navigator) check):


	navigator.serviceWorker.register('serviceworker.js')
		.then((reg) => {

			// Here we add the event listener for receiving messages
			navigator.serviceWorker.addEventListener('message', function(event){
				// Some function that renders the state to the page
				render(event.data.state.count);
			});

			messageServiceWorker({ GET_STATE: true});

		}).catch(function(error) {
			console.error('Service Worker registration error : ', error);
		});

	// When a new SW registration becomes available
	navigator.serviceWorker.oncontrollerchange = function() {
		messageServiceWorker({ GET_STATE: true});
	}

Here we are going to do something interesting; we’re going to tell the page that when it closes, we want to fire an event to the Service Worker letting it know that the tab died. Because Service Workers can exist even when the page isn’t open, we need a way to reset the state when no tabs are open. Again, let’s assume we use feature detection for the Service Worker:


	// Event on tab/window closed, so we can then check for no tabs/window.
	// If we wanted we could make this false to permanently persist state
	if (RESET_ON_NO_CLIENTS) {
		window.onunload = function() {
			// postMessage should be synchronous in this context?
			navigator.serviceWorker.controller.postMessage({
				TAB_KILLED: true
			});
		};
	}

We’ll also need a way to post our actions to our Service Worker so the store can dispatch them, so let’s add that:


	// Send generic messages to the Service Worker
	function messageServiceWorker(data){
		if (navigator.serviceWorker && navigator.serviceWorker.controller) {
			navigator.serviceWorker.controller.postMessage(data);
		}
	}

	// Pass actions specifically to the Service Worker
	function actionToServiceWorker(action) {
		messageServiceWorker({ ACTION: action })
	}

Let’s also say, for the sake of simplicity, that we only want to increment the counter. We could do it like this:


    document.getElementById('increment')
		.addEventListener('click', function () {
			actionToServiceWorker('INCREMENT');
		});

The Service Worker

A Service Worker exists as a single file, although it may import others with the importScripts function. Let’s set up a Service Worker that can handle our state changes and persist them. Because Service Workers are only supported in modern browsers, I’ve written these in ES6 syntax. Firstly, let’s handle the incoming messages to the worker:


	initialiseOnMessage() {
		if (!self) {
			console.error("Self undefined, are you sure this is a worker context?");
			return;
		}
		self.onmessage = (message) => {
			if (message.data.GET_STATE) {
				this.store.getState().then((state) => {
					this.syncTabState(state);
				});
			} else if (message.data.TAB_KILLED) {
				this.checkIfAllTabsKilled(actions.RESET)
			} else if (message.data.ACTION) {
				this.dispatchToStore(message.data.ACTION)
			}
		}
	}

Next, let’s handle syncing state to the tabs. Here we need to be able to dispatch events to our store, sync that store’s state back to the tabs, and also reset the store when all the tabs have been closed. Let’s see how we can do that:


		// Get all the tabs for the current domain scope
		getTabs() {
			return self.clients.claim().then(() => {
				return clients.matchAll(
					{
						includeUncontrolled: true,
						type: "window"
					}
				);
			})
		}


		// Dispatch a store event and sync that back to the tabs
		dispatchToStore(action, clientId) {
			this.store.dispatch(action).then(() => {
				this.store.getState().then((state) => {
					this.syncTabState(state);
				})
			})
		}

		// Check if all the tabs have died and if so reset the state
		checkIfAllTabsKilled(RESET) {

			this.getTabs().then((clients) => {

				// Sometimes the new client exists before we can check if there
				// are no tabs open at all. Essentially we need to handle the refresh case
				const isRefresh = clients.length === 1 && this.lastKnownNumClients < 2;
				const shouldReset = clients.length === 0 || isRefresh;

				if (shouldReset) {
					// Reset state back to normal
					this.store.dispatch(RESET);
				}

				this.lastKnownNumClients = clients.length;

			});

		}

		// Sync the state back to all the available tabs and windows
		syncTabState(newState) {

			this.getTabs().then((clients) => {
				// Loop over all available clients
				clients.forEach((client) => {
					const data = { state: newState }
					client.postMessage(data);
				});

				this.lastKnownNumClients = clients.length;

			});

		}

This code misses out the logic for actually updating our IndexedDB store, but under the hood it’s a mix of a Redux-esque pattern and the idb library I mentioned for persisting that store. The state will only update if the persistence part is successful. You can find the full code for the store logic in the GitHub link below.
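For a rough idea of what that persistence layer might look like, here is a minimal sketch. It assumes a build of the idb library is available to the worker via importScripts and exposes an idb.openDB function; the database name, store name and reducer here are illustrative rather than the exact code used in the demo:


    importScripts('idb.js'); // assumption: a build of idb exposing a global idb.openDB

    const dbPromise = idb.openDB('sw-state', 1, {
        upgrade(db) {
            db.createObjectStore('state');
        }
    });

    class PersistedStore {
        constructor(reducer, initialState) {
            this.reducer = reducer;
            this.initialState = initialState;
        }

        async getState() {
            const db = await dbPromise;
            const state = await db.get('state', 'current');
            return state || this.initialState;
        }

        // The new state is only returned once the IndexedDB write succeeds
        async dispatch(action) {
            const db = await dbPromise;
            const newState = this.reducer(await this.getState(), action);
            await db.put('state', newState, 'current');
            return newState;
        }
    }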

Pulling it All Together

Now I’ve explained the page and Service Worker parts, let’s see how it looks in practice! You can find a link to a live demo here, and a link to the full code here.

Conclusion

Is it possible to put your state management in a Service Worker? Totally, if you’re willing to persist state. Is it sensible? I’m not entirely sure. Some obvious shortcomings are that you’ll have to write a fallback for browsers without Service Worker support, you can’t use it in private browsing in Firefox, and it’s going to increase the complexity of the app with the message passing / asynchronicity aspect. Also, in theory there are more points of failure, as you’re introducing a Service Worker and IndexedDB into the mix. That being said, if having tabs in sync is a critical part of your application, this may be a reasonable approach to solving that specific problem. Another way might be to just broadcast the actions to all other pages, which in theory should keep them in sync whilst keeping the Service Worker stateless.
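For what it’s worth, that stateless alternative could look a little like the sketch below: the worker simply relays each incoming action to every other tab, and each page keeps its own store. This is just an illustration of the idea rather than part of the demo code:


    // Stateless alternative: relay actions to every other tab (sketch)
    self.addEventListener('message', (event) => {
        if (!event.data.ACTION) return;

        self.clients.matchAll({ type: 'window' }).then((windowClients) => {
            windowClients
                // Don't echo the action back to the tab that sent it
                .filter((client) => client.id !== event.source.id)
                .forEach((client) => client.postMessage({ ACTION: event.data.ACTION }));
        });
    });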

23 Apr 2018, 19:37

The Rise of JavaScript Scheduling

What is scheduling?

At a high level, scheduling can be thought of as a way of splitting up work and allocating it to be completed by a compute resource. The work can be processed at some point in the future, in a given order, and by a given resource that the scheduler specifies.

For example, if we had a set of tasks of equal importance, let’s say calculating billing and distributing invoices for users of your Software as a Service platform, we could use a scheduler to distribute that work in a way that evenly allocates processing time to each task. If the tasks were of differing importance, we could allow the scheduler to prioritise work, ensuring more processing time was allocated to higher-priority tasks. For instance, you might prioritise the workloads of paying customers over free users.

At a more granular level there are many different approaches, of varying complexity, to scheduling work. Some examples of mainstream scheduling strategies include: first in first out, earliest deadline first and round robin. The Wikipedia article on scheduling is actually really strong, so I’ll defer to that for an in-depth explanation.

Why schedule work in JavaScript?

Let us consider why a scheduler may be of value in JavaScript. In web development we have the case that:

“By default the browser uses a single thread to run all the JavaScript in your page as well as to perform layout, reflows, and garbage collection. This means that long-running JavaScript functions can block the thread, leading to an unresponsive page and a bad user experience”. - MDN

There are ways to offload work onto other threads via Web Workers, but these have limitations such as not being able to access the DOM, and having to copy data from the main thread and back again (RIP SharedArrayBuffers). In some ways the single-threadedness of JavaScript is useful; otherwise we would have the complex overhead of managing race conditions between multiple threads trying to access the DOM.

Fundamentally, the JavaScript thread of the browser works by way of the event loop, which cycles round executing queued work. At a high level there are three major types of work that the event loop processes:

  • Tasks - Event handlers, setTimeout and setInterval callbacks, etc
  • Microtasks - MutationObserver callbacks, Promises - these get executed whenever the JavaScript call stack empties
  • Rendering steps - requestAnimationFrame queued work, Style, Layout, Paint

Knowing this helps us write and/or understand a well-planned scheduler in conjunction with our own code. We can break up long-running tasks into smaller ones and interleave them with other work, allowing, for example, layouts and repaints to be performed in an efficient manner; a simple sketch of this follows below.
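As a basic illustration of that interleaving, a long loop can be split into small batches that yield back to the event loop between each one, giving the rendering steps a chance to run. The batch size and the processItem callback here are hypothetical:


    // Process a large array in small batches, yielding between each one
    function processInChunks(items, processItem, batchSize = 100) {
        let index = 0;

        function doBatch() {
            const end = Math.min(index + batchSize, items.length);
            for (; index < end; index++) {
                processItem(items[index]);
            }
            if (index < items.length) {
                // Queue the next batch as a new task so rendering can happen in between
                setTimeout(doBatch, 0);
            }
        }

        doBatch();
    }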

The specifics of the event loop are much better left to others; specifically I would recommend Jake Archibald’s talk at JSConf Asia 2018 which is a superb elucidation on the subject (he also has a blog post).

User experience and scheduling

A recurring problematic theme in web applications is the idea of jank: low frame rates and poor interactivity for end users. Having talked a bit about the event loop, how might we leverage that understanding to improve our site’s user experience? One prime example is Wilson Page’s fastdom, which was one of the first schedulers I came across in late 2013. The core premise is that it’s possible to batch up DOM reads and writes, and then schedule them using requestAnimationFrame for noticeably smoother animations. requestAnimationFrame allows developers to schedule DOM updates right before the browser performs the next render cycle (style, layout, paint, composite). This prevents work being done mid-frame and causing the frame to be missed, as shown in the following diagram (thanks Google). Clever stuff!
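The underlying pattern looks roughly like this; a simplified sketch rather than fastdom’s actual implementation, with element standing in for some real DOM node. Reads and writes are queued separately and flushed reads-first inside a single requestAnimationFrame callback, so measurements and mutations never interleave:


    // Simplified read/write batching (not fastdom's real implementation)
    const reads = [];
    const writes = [];
    let flushScheduled = false;

    function scheduleFlush() {
        if (flushScheduled) return;
        flushScheduled = true;
        requestAnimationFrame(() => {
            reads.splice(0).forEach((read) => read());    // measure first
            writes.splice(0).forEach((write) => write()); // then mutate
            flushScheduled = false;
        });
    }

    function measure(fn) { reads.push(fn); scheduleFlush(); }
    function mutate(fn) { writes.push(fn); scheduleFlush(); }

    // Usage: reads and writes never interleave, avoiding layout thrashing
    measure(() => {
        const height = element.offsetHeight;
        mutate(() => { element.style.height = (height * 2) + 'px'; });
    });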

This approach can noticeably improve user experience for DOM-heavy sites. However, fastdom as a library is predominantly about preventing layout thrashing, and generic work scheduling is outside of its scope. Furthermore, requestAnimationFrame is arguably not an appropriate tool in and of itself for handling generic work; each callback will run in its intended frame, and it does not distribute work across multiple frames.

Adoption by frameworks

Within the past few years we’ve seen an increased interest in scheduling work by frameworks. The single-threaded nature of JavaScript, and the plethora of tasks that need to be completed to allow users to navigate a modern web application, pose an interesting challenge - especially for framework developers. We may wish to be animating elements whilst also accepting user input, and sending off that input to a server. There is a possibility that user input and its associated handling could cause our frames to take too long to render, resulting in a rough user experience.

Arguably one of the earlier examples of scheduling is the AngularJS (1) digest cycle. AngularJS came out in 2010, and it had its own built-in event cycle (a scheduler of sorts) for handling its notorious two-way data-binding system. An overview diagram can be seen below. The digest cycle checks for changes between the view data model and the DOM, and then re-renders after the cycle to reflect those changes. Certain elements of the cycle, by the documentation’s own admission, could be considered problematic; for example setTimeout(0) can be janky as the browser repaints after every event.

Angular, AngularJS’s successor, has a different approach to change detection and updates, which can be read about here. It also does some interesting things with asynchronous execution contexts (zones), allowing for a smarter way of doing operations like updating the DOM, error handling and debugging.

Vue.js takes the approach of batching DOM updates asynchronously. There is no ‘digest cycle’ as per AngularJS, as Vue.js encourages a data-driven approach. Here the batched operations are flushed out to the DOM on a given tick cycle. The queue internally uses Promise.then and MessageChannel, with a fallback to setTimeout(fn, 0) if those aren’t available.
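The core of that batching idea can be sketched as follows; this is a simplified illustration of the general technique rather than Vue’s actual source. Updates are collected into a deduplicated queue and flushed on a microtask, so several synchronous changes result in a single pass over the DOM:


    // Simplified async update batching, in the spirit of Vue's internal queue
    const queue = new Set();
    let flushPending = false;

    function queueUpdate(updateFn) {
        queue.add(updateFn); // a Set dedupes repeated updates for the same component
        if (!flushPending) {
            flushPending = true;
            // Flush on a microtask, once the current synchronous work has finished
            Promise.resolve().then(flushQueue);
        }
    }

    function flushQueue() {
        queue.forEach((updateFn) => updateFn());
        queue.clear();
        flushPending = false;
    }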

Other frameworks have explored a holistic view of how to handle the stream of user interactions, network requests, data flow and rendering. A prime and recent example of this is React. React Fiber, fully implemented in React 16, uses scheduling to improve perceived performance and responsiveness in complex web applications. In the world of the web, not all computational work is of equal importance to a user. For example, typing and receiving immediate feedback may be a more critical interaction than having a dashboard receive external data and update instantaneously. The React team has done a lot of work on its reconciliation algorithm to prioritise work in a way that is conducive to a pleasant user experience. This is done fundamentally by scheduling different updates at different priorities. Traditionally, all updates were treated synchronously, with no prioritisation. React’s reconciliation phase was uninterruptible, which could lead to low framerates for complex workloads. Here is an example of the Chrome profiler showing it taking ~750ms to render a frame (the green bar):

The way in which React Fiber attempts to keep consistent framerates is via what they have dubbed time slicing. The process uses requestIdleCallback to defer low-priority work to the browser’s idle times. Fiber also estimates the number of milliseconds remaining in the current idle stage, and when this elapses it stops work to give the browser time to render a frame. For a deeper explanation, check out Giamir Buoncristiani’s post, the React Fiber Architecture README by Andrew Clark and also Dan Abramov’s talk at this year’s JSConf Iceland. In Dan’s demonstration you can clearly see the difference between the synchronous and asynchronous work patterns:



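The essence of that idle-time slicing can be sketched like this; it is a simplified illustration of the general technique rather than React’s actual implementation. Units of work are processed only while the idle deadline reports time remaining, then the queue yields so the browser can render:


    // Simplified time slicing with requestIdleCallback (not React's actual code)
    const workQueue = [];
    let workScheduled = false;

    function performWork(deadline) {
        // Keep processing units of work while this idle period has time left
        while (workQueue.length > 0 && deadline.timeRemaining() > 0) {
            const unitOfWork = workQueue.shift();
            unitOfWork();
        }
        workScheduled = false;
        // If work remains, yield so the browser can render a frame, then continue
        if (workQueue.length > 0) {
            scheduleWork();
        }
    }

    function scheduleWork(unitOfWork) {
        if (unitOfWork) {
            workQueue.push(unitOfWork);
        }
        if (!workScheduled) {
            workScheduled = true;
            requestIdleCallback(performWork);
        }
    }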
Another project that has (very recently) been leveraging scheduling is StencilJS. For those of you who aren’t familiar, StencilJS is a modern web framework that takes popular features from many recent frameworks and combines them with compilation down to native Web Components. Manu Mtz-Almeida of the StencilJS team has recently been pushing commits for scheduling into the core framework, demoing some of his recent work on Twitter:



You can see that the end goal is similar here to what React is doing with Fiber; try and keep the browser rendering at 60fps for a smooth user experience, whilst still completing non-rendering work in reasonable time-frames. Hopefully we’ll be seeing more from the Stencil team in the future!

The building blocks of scheduling in the browser

There are many ways we could build a scheduler in JavaScript. Traditionally we might have implemented it using JavaScript and browser APIs such as:

  • setTimeout - Execute a given function at some point in the future
  • setInterval - Execute a given function on some recurring schedule (every x milliseconds)
  • Promises - An object representing the eventual success or failure of an asynchronous operation

With ever improving browser standards we also have some interesting additional browser APIs that might help us to write smarter schedulers:

  • requestIdleCallback - Schedule work to be done during the browser’s idle periods. Supports deadlines.
  • performance.now - Granular timing API for modern browsers
  • async/await - Allows developers to write easier and cleaner asynchronous code, making it feel more synchronous

As well as these, there is the previously mentioned requestAnimationFrame for visual changes. Interestingly, React was originally relying on requestIdleCallback for Fiber, but they have now written their own polyfill for it (at the time of writing). Indeed there is no single way to write a scheduler, and you could use a myriad of these features to create one. Getting this right appears to be a relatively tricky endeavour; in the words of Bertalan Mikolos, “timing is a delicate thing, and slight mistakes can cause some very strange bugs”.

Personally, the strongest example I’ve seen so far of a general-purpose scheduler is Justin Fagnani’s queue-scheduler, a framework-agnostic JavaScript scheduler. Here he uses async/await, performance.now(), requestIdleCallback and requestAnimationFrame to allow developers to schedule work. It’s worth examining the source code to see how these are used (FYI: it’s written in TypeScript).

Final thoughts

Over time we have seen a more progressive approach towards scheduling. Modern schedulers such as those in React and StencilJS have been written in a way that keeps end users at their heart, keeping frame rates and interactivity high. It is fair to say that with React (arguably the most popular JavaScript framework in modern applications) having taken scheduling to the core of its architecture, we have seen scheduling become mainstream for web developers. We also see API-compatible libraries such as Preact looking to follow suit.

With teams like StencilJS following suit with their user-centric scheduler, there is strong evidence to suggest that smarter scheduling may become commonplace across many approaches to building web applications. I haven’t seen much work done with Web Workers and scheduling, but I feel this could be a strong contender for future work, as inline Web Worker libraries have become more popular and Web Workers are non-blocking on the main thread. I think this is especially true for long-running tasks; see for example a little demo I did with Fibrelite, a library I made for offloading processing to an inline Web Worker.

03 Apr 2018, 19:37

Cancelling Requests with Abortable Fetch

There are often times in a web application where you need to send a request for the latest user input or interaction. Some examples might be an autocomplete or zooming in and out of a map. Let’s think about each of these examples for a moment. Firstly, autocomplete; every time we type (or maybe less often if we were to debounce) we might send out a request. As the user input changes, the old requests might become irrelevant as we keep typing (e.g. ‘java’ and ‘javascript’). That’s potentially a lot of redundant requests before we get to what we’re interested in!

Now the web map case; we’re zooming and panning around the map. As we zoom in and out, we are no longer interested in the tiles from the previous zoom levels. Again, lots of requests might be pending for redundant data.

Taking the first example, let’s set the scene by looking at some naive code showing how we might implement an autocomplete. For the purpose of this article we will be using the more modern fetch rather than XMLHttpRequest for making network requests. Here’s the code:


    autocompleteInput.addEventListener('keydown', function() {

        const url = "https://api.example.com/autocomplete"

        fetch(url)
            .then((response) => {
                // Do something with the response
                updateAutocompleteMenu()
            })
            .catch((error) => {
                // Something went wrong
                handleAutocompleteError(error);
            })

    });


The problem in this case is that each one of these requests will complete, even if it is no longer relevant. We could implement some extra logic in updateAutocompleteMenu to prevent unnecessary code execution, but this won’t actually stop the request. It’s also worth noting here that browsers have a limit on concurrent outgoing requests, which means they queue requests once that limit is hit (although the limit varies by browser).

Abortable Fetch

A new browser technology that we can leverage to solve the aforementioned issue is Abortable Fetch. Abortable fetch relies on the browser specification for AbortController. The controller has a property called signal, which we can pass to our fetch as an option (also named signal), and then use at our later convenience to cancel the request with the controller’s abort method.

An example might look a little like this:


    const url = "https://api.example.com/autocomplete"
    let controller;
    let signal;

    autocompleteInput.addEventListener('keyup', () => {

        if (controller !== undefined) {
            // Cancel the previous request
            controller.abort();
        }

        // Feature detect
        if ("AbortController" in window) {
            controller = new AbortController();
            signal = controller.signal;
        }

        // Pass the signal to the fetch request
        fetch(url, {signal})
            .then((response) => {
                // Do something with the response
                updateAutocompleteMenu()
            })
            .catch((error) => {
                // Something went wrong
                handleAutocompleteError(error);
            })
    });

Here we do feature detection to determine if we can use AbortController (it’s supported in Edge, Firefox and Opera, and coming in Chrome 66!). We also determine if a controller has already been created, and if so we call controller.abort(), which will cancel the previous request. You can also pass the same signal to multiple fetches to cancel them all at once, as shown below.
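That last point is worth a quick illustration; one controller’s signal can be handed to several requests, and a single call to abort() cancels them all. The URLs here are just placeholders:


    const controller = new AbortController();
    const { signal } = controller;

    // Both requests share the same signal...
    Promise.all([
        fetch('https://api.example.com/tiles/1', { signal }),
        fetch('https://api.example.com/tiles/2', { signal })
    ]).catch((error) => {
        if (error.name === 'AbortError') {
            // ...so a single abort() cancels them both
            console.log('Requests cancelled');
        }
    });

    controller.abort();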

A little demo

I’ve created a small demo showing how to use Abortable Fetch, loosely based on the autocomplete idea above (without any of the implementation details!). Every time you type, it makes a network request; if you make a new keystroke before the old request has completed, it aborts the previous fetch. It looks a little something like this in practice:

You can check the code out here.

Thinking beyond fetch

Perhaps the coolest part about AbortController is that it has been designed to be a generic mechanism for aborting asynchronous tasks. It is part of a WHATWG specification, meaning it is a DOM specification rather than a language (ECMAScript) specification, but for frontend development it is still a useful feature. You could leverage it as a cleaner async control flow mechanism whenever you implement asynchronous tasks (i.e. when using Promises). Feel free to take a look at Bram Van Damme’s super article for a more detailed example of what I’m talking about.
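As a small sketch of what that might look like outside of fetch, a Promise-based task can accept a signal, bail out if it has already been aborted, and listen for the abort event in order to reject. The wait helper below is purely illustrative:


    // A generic, abortable asynchronous task (illustrative)
    function wait(ms, { signal } = {}) {
        return new Promise((resolve, reject) => {
            if (signal && signal.aborted) {
                return reject(new DOMException('Aborted', 'AbortError'));
            }
            const timer = setTimeout(resolve, ms);
            if (signal) {
                signal.addEventListener('abort', () => {
                    clearTimeout(timer);
                    reject(new DOMException('Aborted', 'AbortError'));
                });
            }
        });
    }

    const controller = new AbortController();
    wait(5000, { signal: controller.signal })
        .catch((error) => console.log(error.name)); // "AbortError"
    controller.abort();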