I’m sure you can come up with some additions to your service worker script. Take a second look at the image logic, for example: when the user requests an image, you look for a cached version; otherwise you fetch the image from the network and put a copy in the cache.
That doesn’t take into account the worst-case scenario: What if the image can’t be retrieved from the cache or the network? Take a leaf out of the strategy you’re using for HTML—you could add one final conditional step to your image-handling logic: if both the cache and the network come up empty, show a fallback image.
For this to work, you’d need to update your install event code. The fallback image would need to be included in your static assets, just like your offline page:
staticCache.addAll([
'/path/to/stylesheet.css',
'/path/to/javascript.js',
'/offline.html',
'/fallback.svg'
]);
Don’t forget to update your version variable too:
const version = 'V0.05';
Now you can add a catch clause to the part of your image-handling code where you try fetching from the network:
.catch( error => {
return caches.match('/fallback.svg');
})
Here’s how your updated image-handling code looks:
// When the user requests an image
if (request.headers.get('Accept').includes('image')) {
fetchEvent.respondWith(
// Look for a cached version of the image
caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
return responseFromCache;
} // end if
// Otherwise fetch the image from the network
return fetch(request)
.then( responseFromFetch => {
// Put a copy in the cache
const copy = responseFromFetch.clone();
fetchEvent.waitUntil(
caches.open(imageCacheName)
.then( imageCache => {
return imageCache.put(request, copy);
}) // end open then
); // end waitUntil
return responseFromFetch;
}) // end fetch then
.catch( error => {
// Otherwise show a fallback image
return caches.match('/fallback.svg');
}); // end fetch catch and return
}) // end match then
); // end respondWith
return; // Go no further
} // end if
To recap, here’s your updated logic for images: look for a cached version of the image; otherwise fetch the image from the network and put a copy in the cache; if all of that fails, show a fallback image.
I spot another opportunity to update the logic for your images. The current logic is working great, but the image cache never gets fresh copies of images—they’re only added to the cache the first time they’re fetched. You could expand the logic to keep the cache updated regardless: whenever there’s a cached version, send it back as before, but also fetch a fresh copy from the network and use it to update the cache.
Here’s the code where you carry out the first two steps (looking for a cached version of the image and sending it back):
// When the user requests an image
if (request.headers.get('Accept').includes('image')) {
fetchEvent.respondWith(
// Look for a cached version of the image
caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
return responseFromCache;
}
You can update that if block to include the new extra steps:
if (responseFromCache) {
// Fetch a fresh version from the network
fetchEvent.waitUntil(
fetch(request)
.then( responseFromFetch => {
// Update the cache
caches.open(imageCacheName)
.then( imageCache => {
return imageCache.put(request, responseFromFetch);
}); // end open then
}) // end fetch then
); // end waitUntil
return responseFromCache;
} // end if
Now your cache of images won’t ever get too stale.
I think we’ve covered some good ways of optimizing our fetch-handling code for images. Now let’s look at handling web pages.
The current logic for your HTML pages is fairly straightforward. There are only two possibilities: either the user gets the page they want directly from the network, or they get a fallback page.
But suppose you had a separate cache just for pages. Then you could introduce an intermediate step to your logic: before resorting to the fallback page, look for a cached copy of the page being requested.
You’ll need to make a new cache for pages. Like the images cache, this one doesn’t need to be versioned:
const version = 'V0.05';
const staticCacheName = version + 'staticfiles';
const imageCacheName = 'images';
const pagesCacheName = 'pages';
Then update your list of valid cache names:
const cacheList = [
staticCacheName,
imageCacheName,
pagesCacheName
];
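If your service worker cleans up old caches during the activate event by deleting anything whose name isn’t in cacheList, then adding pagesCacheName to that list is what keeps your new cache from being swept away. Here’s a rough sketch of what that kind of clean-up handler might look like (a sketch only; your existing activate code may well differ in the details):
addEventListener('activate', activateEvent => {
  activateEvent.waitUntil(
    caches.keys()
    .then( cacheNames => {
      return Promise.all(
        cacheNames.map( cacheName => {
          // Delete any cache whose name isn't on the list of valid names
          if (!cacheList.includes(cacheName)) {
            return caches.delete(cacheName);
          }
        }) // end map
      ); // end Promise.all
    }) // end keys then
  ); // end waitUntil
}); // end addEventListener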
You could prepopulate that new cache during the install event. But remember, that event only fires once. Any files you put in a cache at that point will remain unchanged. That’s great for static files like CSS, JavaScript, and fonts, but it’s not ideal for web pages that are updated frequently.
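Just to illustrate, prepopulating might look something like this sketch (the page URLs are hypothetical, and you could equally fold this into your existing install code):
// A sketch only: prepopulating the pages cache during install.
// These page URLs are hypothetical examples.
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(pagesCacheName)
    .then( pagesCache => {
      return pagesCache.addAll([
        '/',
        '/about/'
      ]);
    }) // end open then
  ); // end waitUntil
}); // end addEventListener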
Instead, you could repeat what you’re doing with images, and populate the cache as you go. Every time the user visits a page, put a copy of that page in the cache.
You’re still treating images and pages differently—for images, you look in the cache first; for pages, you try the network first. In both cases you’re building up a bigger and bigger cache as the user explores your site.
The code for dealing with pages remains the same to begin with:
// When the user requests an HTML file
if (request.headers.get('Accept').includes('text/html')) {
fetchEvent.respondWith(
// Fetch that page from the network
fetch(request)
Now you can introduce a then clause to put a copy of the response into the cache:
.then( responseFromFetch => {
// Put a copy in the cache
const copy = responseFromFetch.clone();
fetchEvent.waitUntil(
caches.open(pagesCacheName)
.then( pagesCache => {
return pagesCache.put(request, copy);
})
);
return responseFromFetch;
})
With that code in place, your site’s visitors will build up a cache of pages as they travel around your site. If they lose their network connection, you can try showing them a cached version of the page they’re requesting. As long as they’ve visited it at least once before, the page should be in the cache.
You can use the catch clause to search your caches:
.catch( error => {
return caches.match(request);
})
Finally, if all else fails, serve up the fallback page. You’ll need to expand your catch clause to find out whether the match returned a meaningful response. If the response was empty, grab the fallback page from your static cache:
.catch( error => {
return caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
return responseFromCache;
}
return caches.match('/offline.html');
});
})
Putting all that together, here’s your updated code for handling pages:
// When the user requests an HTML file
if (request.headers.get('Accept').includes('text/html')) {
fetchEvent.respondWith(
// Fetch that page from the network
fetch(request)
.then( responseFromFetch => {
// Put a copy in the cache
const copy = responseFromFetch.clone();
fetchEvent.waitUntil(
caches.open(pagesCacheName)
.then( pagesCache => {
return pagesCache.put(request, copy);
}) // end open then
); // end waitUntil
return responseFromFetch;
}) // end fetch then
.catch( error => {
// Otherwise look for a cached version of the page
return caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
return responseFromCache;
} // end if
// Otherwise show the fallback page
return caches.match('/offline.html');
}); // end match then and return
}) // end fetch catch
); // end respondWith
return; // Go no further
} // end if
And with that, you’ve created a really nice offline experience. If someone is browsing your site, they might lose their internet connection and never even notice—they’ll still be able to view any pages they previously visited.
So far, your fetch-handling logic has been based on file types: HTML, images, and everything else. If you wanted, you could apply different logic depending on other factors, like which part of your site is being requested.
Here’s a fairly typical example: Let’s say you’ve got a site that publishes articles. Those articles might appear under a particular URL like /posts/ or /articles/. If the content of those articles rarely changes after publication, you might as well try serving them from the cache instead of the network. That way, the user will get a really speedy response.
You can still choose to update the cache with a fresh copy of the page. Then the next time the user visits that page, they’ll get a fresher version. The version they get from the cache will be slightly out of date—it will be one version behind—but if the changes are likely to be minor corrections, the slightly stale nature of the response is a small tradeoff for the super-speedy response time.
You probably wouldn’t want to serve up a cached version of your homepage, where content freshness is a priority. That’s fine—you can write different code for different scenarios. Instead of only looking at the file type, you can also look at the URL being requested.
Here’s how you’re starting your fetch-handling code:
addEventListener('fetch', fetchEvent => {
const request = fetchEvent.request;
That request object has a property called url. You can use this to look for specific strings of text, like /products/ or /articles/:
if (request.url.includes('/articles/')) {
// Logic for article pages goes here
return;
}
If you need more fine-grained control in that if statement, you can use a regular expression with the test method:
if (/\/articles\/.+/.test(request.url)) {
// Now you've got two problems
return;
}
That’s looking for the string /articles/ followed by at least one other character…I think. Regular expressions are my kryptonite.
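If regular expressions really aren’t your thing, one possible alternative (just a sketch, using a hypothetical requestUrl variable) is the URL interface, which lets you examine the path of the request on its own:
// An alternative sketch: use the URL interface to examine the path.
const requestUrl = new URL(request.url);
// True for /articles/ followed by at least one more character
if (requestUrl.pathname.startsWith('/articles/') && requestUrl.pathname !== '/articles/') {
  // Logic for article pages goes here
  return;
}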
However you decide to do it, being able to apply different logic to different URL patterns opens up a whole world of possibilities.
Here’s the logic you might apply for article pages if you want to prioritize speed over freshness: look in the cache first and send back any cached version straight away, while fetching a fresh copy from the network in the background to update the cache; otherwise fetch the page from the network and put a copy in the cache; if all else fails, show the fallback page.
Here we go:
// When the requested page is an article
if (/\/articles\/.+/.test(request.url)) {
fetchEvent.respondWith(
Start by looking for a match from the cache:
// Look in the cache
caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
Before sending back the response from the cache, use waitUntil to fetch a fresh version in the background:
// Fetch a fresh version from the network
fetchEvent.waitUntil(
fetch(request)
When we get a fresh copy, put it in the cache:
.then( responseFromFetch => {
// Update the cache
caches.open(pagesCacheName)
.then( pagesCache => {
return pagesCache.put(request, responseFromFetch);
});
})
Finally, don’t forget to send back the response from the cache:
return responseFromCache;
Putting those steps together, you get this:
// Look in the cache
caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
// Fetch a fresh version from the network
fetchEvent.waitUntil(
fetch(request)
.then( responseFromFetch => {
// Update the cache
caches.open(pagesCacheName)
.then( pagesCache => {
return pagesCache.put(request, responseFromFetch);
}); // end open then
}) // end fetch then
); // end waitUntil
return responseFromCache;
} // end if
The next part—“otherwise fetch the page from the network”—follows the familiar pattern:
// Otherwise fetch the page from the network
return fetch(request);
But it needs to be expanded for the additional step—“and put a copy in the cache”:
// Otherwise fetch the page from the network
return fetch(request)
.then( responseFromFetch => {
// Put a copy in the cache
const copy = responseFromFetch.clone();
fetchEvent.waitUntil(
caches.open(pagesCacheName)
.then( pagesCache => {
return pagesCache.put(request, copy);
})
);
return responseFromFetch;
})
Finally there’s the last resort—“otherwise show the fallback page”:
// Otherwise show the fallback page
.catch( error => {
return caches.match('/offline.html');
});
Putting it all together, you get this:
// When the requested page is an article
if (/\/articles\/.+/.test(request.url)) {
fetchEvent.respondWith(
// Look in the cache
caches.match(request)
.then( responseFromCache => {
if (responseFromCache) {
// Fetch a fresh version from the network
fetchEvent.waitUntil(
fetch(request)
.then( responseFromFetch => {
// Update the cache
caches.open(pagesCacheName)
.then( pagesCache => {
return pagesCache.put(request, responseFromFetch);
}); // end open then
}) // end fetch then
); // end waitUntil
return responseFromCache;
} // end if
// Otherwise fetch the page from the network
return fetch(request)
.then( responseFromFetch => {
// Put a copy in the cache
const copy = responseFromFetch.clone();
fetchEvent.waitUntil(
caches.open(pagesCacheName)
.then( pagesCache => {
return pagesCache.put(request, copy);
}) // end open then
); // end waitUntil
return responseFromFetch;
}) // end fetch then
.catch( error => {
// Otherwise show the fallback page
return caches.match('/offline.html');
}); // end fetch catch and return
}) // end match then
); // end respondWith
return; // Go no further
} // end if
That’s a hefty chunk of code! You can put all of it right inside the if statement that checks for HTML requests:
// When the user requests an HTML file
if (request.headers.get('Accept').includes('text/html')) {
// When the requested page is an article
if (/\/articles\/.+/.test(request.url)) {
// Look in the cache
// Fetch a fresh version from the network
// Update the cache
// Otherwise fetch the page from the network
// Put a copy in the cache
// Otherwise show the fallback page
return;
}
// Otherwise fetch the page from the network
// Put a copy in the cache
// Otherwise look in the cache
// Otherwise show the fallback page
return;
}
That gives you different priorities for different kinds of pages. For articles, try the cache first. For other pages, try the network first.
Your code is getting quite long. It’s daunting to have so much JavaScript. That’s why I find comments in the code so helpful—they help me keep track of what’s going on where.
Even though you have many lines of code, the overall structure of that code is made up of repeating patterns: look for a cached version, fetch from the network, put a copy in the cache, and fall back to a default response when all else fails.
Those are the building blocks, and, just like pieces of LEGO, they can be arranged into an almost infinite variety of configurations.
The logic for article pages and images is a particularly powerful pattern. Because you’re looking in the cache first before trying the network, a returning visitor to your site will get the content they want almost instantly. It doesn’t matter whether they’re online, offline, or on an intermittent connection—in some ways, having a flaky connection is worse than having no connection at all. That’s why this pattern can make such a difference to the user experience.
This “cache first, then network” pattern has been labelled Offline First (you can hear the capital letters when people say it). It’s a somewhat misleading moniker. You can’t offer a truly offline-first experience—the user must visit your site at least once to get the benefit. But it’s useful shorthand for a way of thinking about how people might interact with your site.
This approach makes no assumptions about the kind of network connection someone might have. In much the same way that a service worker can be thought of as an enhancement to your existing site, the Offline First approach treats the network itself as an enhancement.
There are some situations where you can apply Offline First thinking to the entire site. An in-browser game that doesn’t include team play could be cached in its entirety. I published a book online at resilientwebdesign.com that doesn’t require an internet connection to be read. The contents of the book hardly ever change (apart from the occasional fixed typo), so caching the entire thing feels like a safe bet.
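For a small, mostly static site like that, the install event could cache every page and asset up front. As a sketch (with hypothetical file names), it might look something like this:
// A sketch only: caching an entire small site during install.
// The file names here are hypothetical.
addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(staticCacheName)
    .then( staticCache => {
      return staticCache.addAll([
        '/',
        '/styles.css',
        '/scripts.js',
        '/images/logo.svg'
        // ...and every other page and asset on the site
      ]);
    }) // end open then
  ); // end waitUntil
}); // end addEventListener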
Still, it’s somewhat presumptuous. After all, people don’t have an infinite amount of room on their devices. So let’s look at other ways to make our service workers as respectful as possible.
Tidying Up