web-development – Pal's Blog

Offline documentation with webdoc

Before going on a long flight, I download PDFs of reference documentation for whatever software library I will be programming with. Having the documentation handy means I won't get stuck on an unfamiliar edge case. It would be very convenient if documentation websites could be cached offline − and that's why I added an offline storage option in webdoc. The documentation for PixiJS and melonJS can now be downloaded for offline use! I'll walk you through how I did it − the technique can be replicated for any static website.

How is it done?

It’s done using a service worker!

A service worker is a script that acts as a proxy between a browser and the web server. It can intercept fetches done on the main thread and respond with a cached resource, or a computed value, or allow the request to continue the “normal” way to the web server. The service worker controls a cache storage and can decide to put or delete resources from its caches. Note that this “cache storage” is separate from the browser’s regular HTTP cache.

If your static website is hosted on a free service like GitHub Pages, being able to control the caching policy can be very handy. GitHub Pages’ caching policy sets a 10-minute TTL for all HTTP requests; this adds redundant downloads to repeat visits. A service worker can be leveraged to evict cached resources only when a web page has been modified.

A service worker runs in a separate thread from the page's main thread. It stays active when a device is not connected to the Internet, so it can serve cached pages even when the device is offline!

webdoc’s caching policy

webdoc’s service worker uses two separate caches:

  1. The “main cache” holds the core assets of the website – react, react-dom, mermaid.js, material-ui icons. These assets have versioned URLs, so they never need to be evicted from the cache. The main cache is simple and has no eviction policy.
  2. The “ephemeral cache” holds all assets that might be modified when documentation is regenerated. This is the generated HTML and webdoc’s own CSS / JS. To facilitate cache eviction, webdoc generates a md5 hash of its documentation manifest and inserts it into the website it generates.
    1. This hash is refetched on the main thread and stored in web storage on each page load.
    2. The service worker tags cached resources with the associated hash when they are downloaded. The tagging is done by appending a “x-manifest-hash” header to responses.
    3. A hash mismatch between web storage and a cached response triggers a refetch from the web server, and the cache is updated with the new response.
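The tagging in step 2 can be sketched as a small helper that copies a fetched response with the extra header. This is an illustration, not webdoc's actual code − the helper name and hash value are hypothetical:

```javascript
// Sketch: tag a fetched Response with the manifest hash so a cached copy
// can be validated later. Helper name and hash are hypothetical.
function tagWithManifestHash(response, manifestHash) {
  // Responses are immutable, so build a copy that carries the extra header
  const headers = new Headers(response.headers);
  headers.set("x-manifest-hash", manifestHash);

  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}
```

A cache hit can then be validated in step 3 by comparing the response's x-manifest-hash header against the hash stored in web storage.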

Let’s dive into the code


const registration = await navigator.serviceWorker.register(getResourceURI("service-worker.js"));

The first step is to register the service worker so that the browser downloads and runs it. getResourceURI is a helper to locate a static resource in a webdoc site.

Before the main thread can communicate with the service worker, the browser must activate it, so the second step is to wait for the registration to activate.

const waitOn = navigator.serviceWorker.controller ?
    Promise.resolve() :
    new Promise((resolve, reject) => {
      const worker = registration.installing ?? registration.waiting ?? registration.active;
      if (!worker) return reject(new Error("No worker found"));
      worker.onstatechange = (e) => {
        if (e.target.state === "active") resolve();
      };
    });

// This hangs on the first page load because the browser doesn't
// activate the service worker until the second visit.
await waitOn;

navigator.serviceWorker.controller is what lets the main thread control and communicate with service workers. Its value is null until the service worker activates – which is signaled by the “statechange” event on the worker object.

Note that the service worker won’t activate on the first page load; the browser activates it on the second page load. That’s why it’s important to wait for the controller to become non-null.

Hash verification

Once the service worker is registered, the website hash must be downloaded and compared to what is in web storage. The first time the service worker is registered, however, the local hash is null, so a hash mismatch occurs (which is desired).

    // https://github.com/webdoc-labs/webdoc/blob/a52570c22fc3161e1f19f4997eb96081d1ea9d34/core/webdoc-default-template/src/protocol/index.js#L47
    if (!APP_MANIFEST) {
      throw new Error("The documentation manifest was not exported to the website");
    }

    // Use webdoc's IndexedDB wrapper to pull out the manifest URL & hash
    const {manifest, manifestHash, version} = await this.db.settings.get(APP_NAME);
    // Download the latest hash from the server
    const {hash: verifiedHash, offline} = await webdocService.verifyManifestHash(manifestHash);

    // If the manifest URL or hash don't match, we need to update IndexedDB
    // and send a message to the service worker!
    if (manifest !== APP_MANIFEST ||
          manifestHash !== verifiedHash ||
          version !== VERSION) {
      console.info("Manifest change detected, reindexing");
      await this.db.settings.update(APP_NAME, {
        manifest: APP_MANIFEST,
        manifestHash: verifiedHash,
        origin: window.location.origin,
        version: VERSION,
      });

      // Notify the service worker about the new manifest
      if (typeof APP_MANIFEST === "string") {
        navigator.serviceWorker.controller.postMessage({
          type: "lifecycle:init",
          app: APP_NAME,
          manifest: APP_MANIFEST,
        });
      }
    }


The service worker receives the lifecycle:init message on a hash mismatch and uses it to download the manifest data and recache the website if offline storage is enabled.

  case "lifecycle:init": {
    // Parse the message
    const {app, manifest} = (message: SwInitMessage);

    try {
      // Open the database & fetch the manifest concurrently
      const [db, response] = await Promise.all([
        fetch(new URL(manifest, new URL(registration.scope).origin)),
      const data = await response.json();

      // Dump all the hyperlinks in the manifest into IndexedDB. This is used by
      // "cachePagesOffline" to locate all the pages in the website that need
      // downloaded for offline use.
      await db.hyperlinks.put(app, data.registry);

      // Caches the entire website if the user has enabled offline storage
      const settings = await db.settings.get(app);
      if (settings.offlineStorage) await cachePagesOffline(app);
    } catch (e) {
      console.error("fetch manifest", e);



Now let’s walk through how webdoc caches resources on the website. The caching policy is what makes the website work when a user is offline, and it also makes the pages load instantly otherwise. The fetch event is intercepted by the service worker and a response is returned from the cache if available.

// https://github.com/webdoc-labs/webdoc/blob/a52570c22fc3161e1f19f4997eb96081d1ea9d34/core/webdoc-default-template/src/service-worker/index.js#L21
// Registers the "fetch" event handler
self.addEventListener("fetch", function(e: FetchEvent) {
  // Skip 3rd party resources like analytics scripts. This is because
  // the service worker can only fetch resources from its own origin
  if (new URL(e.request.url).origin !== new URL(registration.scope).origin) {
    return;
  }

The respondWith method on the event is used to provide a custom response for 1st party fetches. The caches global exposes the cache storage API used here.

  // https://github.com/webdoc-labs/webdoc/blob/a52570c22fc3161e1f19f4997eb96081d1ea9d34/core/webdoc-default-template/src/service-worker/index.js#L26
  e.respondWith(
    Promise.all([
      // Open the main & ephemeral cache together
      // (the cache name constants here are placeholders)
      caches.open(MAIN_CACHE),
      caches.open(EPHEMERAL_CACHE),
    ]).then(async ([mainCache, ephemeralCache]) => {
      // Check main cache for a hit first - since we know hash verification
      // isn't required for versioned assets in that cache
      const mainCacheHit = await mainCache.match(e.request);
      if (mainCacheHit) return mainCacheHit;

      // Check the ephemeral cache for the resource and also pull out the hash
      // from IndexedDB
      const ephemeralCacheHit = await ephemeralCache.match(e.request);
      const origin = new URL(e.request.url).origin;
      const db = await webdocDB.open();
      const settings = await db.settings.findByOrigin(origin);

      if (settings && ephemeralCacheHit) {
        // Get the tagged hash on the cached response. Remember responses are
        // tagged using the x-manifest-hash header
        const manifestHash = ephemeralCacheHit.headers.get("x-manifest-hash");

        // If the hash matches, great!
        if (settings.manifestHash === manifestHash) return ephemeralCacheHit;
        // Otherwise continue and fetch the resource again
        else {
          console.info("Invalidating ", e.request.url, " due to bad X-Manifest-Hash",
            `${manifestHash} vs ${settings.manifestHash}`);
        }
      }

If neither the main nor the ephemeral cache has a hit, then the resource is fetched from the web server by the service worker. A fetchVersioned helper is used to add the “x-manifest-hash” header to the returned response. The response is put into the appropriate cache so a future page load doesn’t trigger a download.

// https://github.com/webdoc-labs/webdoc/blob/a52570c22fc3161e1f19f4997eb96081d1ea9d34/core/webdoc-default-template/src/service-worker/index.js#L49
      try {
        // Fetch from the server and add "x-manifest-hash" header to response
        const response = await fetchVersioned(e.request);

        // Check if the main cache can cache this response
        if (VERSIONED_APP_SHELL_FILES.some((file) => e.request.url.endsWith(file))) {
          await mainCache.put(e.request, response.clone());
        // Check if the ephemeral cache can hold the response (all HTML pages are included)
        } else if (
          settings && (
            EPHEMERAL_APP_SHELL_FILES.some((file) => e.request.url.endsWith(file)) ||
            e.request.url.endsWith(".html"))) {
          await ephemeralCache.put(e.request, response.clone());
        }

        return response;
      } catch (e) {
        // Finish with cached response if network offline, even if we know it's stale
        if (ephemeralCacheHit) return ephemeralCacheHit;
        else throw e;
      }

Note that at the bottom, a catch block is used to return a cached response even if we know the hash didn’t match. This occurs when the resource is stale but the user isn’t connected to the Internet so downloading the latest resource from the web server isn’t possible.

webdoc is the only documentation generator with offline storage that I know of. It supports JavaScript and TypeScript codebases. Give it a try and let me know what you think!


PixiJS Tilemap Kit 3

In my effort to bring tighter integration to the PixiJS ecosystem, I’m upgrading external PixiJS packages and working towards lifting them to the standard of the main project. @pixi/tilemap 3 is the first package in this process. Yes, I’ve republished pixi-tilemap as @pixi/tilemap.

Here, I want to cover the new, leaner API that @pixi/tilemap 3 brings to the table. This package by Ivan Popleyshev gives you an optimized rectangular tilemap implementation you can use to render a background for your game or app composed of tile textures. The documentation is available at https://api.pixijs.io/@pixi/tilemap.html.


A tileset is the set of tile textures used to build the scene. Generally, you’d want the tileset to be in one big base-texture to reduce the number of network requests and improve rendering batch efficiency.

To use @pixi/tilemap, you’ll first need to export a tileset atlas as a sprite sheet. PixiJS’ spritesheet loader populates your tile textures from the sheet’s manifest. If you don’t have one at hand, you can create a sample tileset as follows:

  • Download this freebie tileset from CraftPix.net here: https://craftpix.net/download/24818/. You’ll need to sign up, however.
  • Download and install TexturePacker: https://www.codeandweb.com/texturepacker
  • Drag the “PNG” folder of the downloaded tileset into TexturePacker. It will automatically pack all the tiles into one big atlas image.
  • Then click on “Publish sprite sheet” and save the manifest.
The generated tileset should look like this!


The Tilemap class renders a tilemap from a predefined set of base-textures containing the tile textures. Each rendered tile references its base-texture by an index into the tileset array. This tileset array is first passed when the tilemap is created; however, you can still append base-textures without changing previously added tiles after it is instantiated.

The following example paints a static tilemap from a CraftPix tileset.

The texture passed to tile() must be one of the atlases in the tilemap’s tileset. Otherwise, the tilemap will silently drop the tile. As we’ll discuss later on, CompositeTilemap can be used to get around this limitation.

Animated Tiles

The options passed to tile let you animate the rendered tile between different tile textures stored in the same base-texture. The different frames must be located uniformly in a table (or a single row/column).

The texture you pass to tile will be the first frame. Then the following parameters specify how Tilemap will find other frames:

  • animX: The x-offset between frame textures.
  • animY: The y-offset between frames.
  • animCountX: The number of frame textures per row of the table. This is 1 by default.
  • animCountY: The number of frames per column of the table. This is 1 by default.

If your frames are all in a row, you don’t need to specify animY and animCountY.

The animation frame vector (tileAnim) specifies which frame to use for all tiles in the tilemap. tileAnim[0] specifies the column modulo, and tileAnim[1] specifies the row modulo. Since it wraps around when the column/row is fully animated, you don’t have to do it yourself.
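To make the frame selection concrete, here is a hedged sketch − not the library’s internal code − of how an animated tile’s texture offset follows from these parameters:

```javascript
// Illustrative only: derive a tile's texture offset from the animation
// parameters described above. tileAnim is the global frame vector.
function frameOffset(tileAnim, {animX = 0, animY = 0, animCountX = 1, animCountY = 1}) {
  const frameX = tileAnim[0] % animCountX; // column wraps around automatically
  const frameY = tileAnim[1] % animCountY; // row wraps around automatically

  return {dx: frameX * animX, dy: frameY * animY};
}
```

With animCountX set to 2, for example, the offset alternates between 0 and animX as tileAnim[0] increments − exactly the two-frame door animation described below.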

The above example takes advantage of the fact that some regular doors and wide doors are placed in a row in the sample atlas. animX, animCountX are used to animate between them every 500ms.

Tileset limitations

Tilemap renders the whole tilemap in one draw call. It doesn’t build intermediate geometry batches like PixiJS’ Graphics does. All the tileset base-textures are bound to the GPU together.

This means that there’s a limit to how many tile sprite sheets you can use in each tilemap. WebGL 1 guarantees that at least 8 base-textures can be used together; however, most devices support 16. You can check this limit by evaluating gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS) on the renderer’s WebGL context.


If your tileset contains more base-textures than this limit, Tilemap will silently fail to render its scene.

If you’re using only one sprite sheet like the examples above, you don’t need to worry about hitting this limit. Otherwise, CompositeTilemap is here to help.


A “tilemap composite” will layer tilesets into multiple tilemaps. You don’t need to predefine the base-textures you’re going to use. Instead, it will try to find a tilemap with the same base-texture in its tileset when you add a tile; if none exists, the base-texture is added into a layered tilemap’s tileset. New tilemaps are automatically created when existing ones fill up.

In most cases, you can trivially swap usage of Tilemap with CompositeTilemap. However, you have to be careful about z-ordering. The tiles using textures in later tilemaps will always render above. This may become a problem with overlapping tiles in some cases.

The following example uses a CompositeTilemap to render one of the previous examples. Instead of using a separate Sprite for the background, it adds the background itself as a tile too.

Tilemap rendering


Tilemap internally stores tiles in a geometry buffer, which contains interleaved data for each vertex.

  • Position (in local space)
  • Texture coordinates
  • Texture frame of the tile
  • Animation parameters (specially encoded into a 32-bit 2-vector)
  • Texture index (into the tileset)

This buffer is mostly static and is lazily updated whenever the tiles are modified between rendering ticks. If the tilemap is left unchanged, the geometry is used directly from graphics memory.


The TileRenderer plugin holds the shared tilemap shader.

The vertex program decodes the animation parameters, calculates and passes the texture frame, texture coordinates, and texture index to the fragment program. The animation frame vector is passed as a uniform. 

Then, the fragment program samples the texel from the appropriate texture and outputs the pixel.


@pixi/tilemap’s settings (which is discussed further on) contains a property called TEXTILE_UNITS. This is the number of tile base-textures that are “sewn” together when uploaded to the GPU. You can use this to increase the tileset limit per texture.

The “combined” texture is called a textile. The textile is divided into a 2-column table of square slots. Each slot is a square of size TEXTILE_DIMEN. Your tileset base-textures must be smaller than this dimension for the textile to work.
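Assuming slots are filled in row-major order (an assumption on my part − the actual packing order may differ), the slot of the i-th base-texture in a textile can be located like this:

```javascript
// Sketch: locate the square slot of texture unit `i` in a textile, which is
// a 2-column table of TEXTILE_DIMEN-sized square slots. Row-major order is
// assumed here for illustration.
function textileSlot(i, textileDimen) {
  const column = i % 2;
  const row = Math.floor(i / 2);

  return {x: column * textileDimen, y: row * textileDimen};
}
```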

The following demonstration shows what a textile looks like when uploaded. TEXTILE_DIMEN was set to 256 so that images aren’t spread out too far (it is 1024 by default).


@pixi/tilemap exports a “settings” object that you should configure before a tilemap is created.

  • TEXTURES_PER_TILEMAP: This is the limit of tile base-textures kept in each layer tilemap of a composite. Once the last tilemap is filled to this limit, the next texture will go into a new tilemap.
  • TEXTILE_DIMEN, TEXTILE_UNITS: Used to configure textiles. If TEXTILE_UNITS is set to 1 (the default), textiles are not used.
  • TEXTILE_SCALE_MODE: Used to set the scaling mode of the resulting textile textures.
  • use32bitIndex: This option enables tilemaps’ rendering with more than 16K tiles (64K vertices).
  • DO_CLEAR: This configures whether textile slots are cleared before the tile textures are uploaded. You can disable this if tile textures “fully” cover TEXTILE_DIMEN and leave no space for a garbage background to develop.
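Since these settings must be applied before any tilemap is created, configuration looks something like this (the values here are arbitrary examples, not recommendations):

```javascript
import {settings, CompositeTilemap} from "@pixi/tilemap";

// Configure before creating any tilemap. Values are example choices.
settings.TEXTURES_PER_TILEMAP = 16; // tileset limit per layered tilemap
settings.TEXTILE_UNITS = 4;         // sew 4 base-textures into each textile
settings.TEXTILE_DIMEN = 512;       // each textile slot is a 512px square
settings.use32bitIndex = true;      // allow more than 16K tiles per tilemap

const tilemap = new CompositeTilemap();
```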

Canvas support

@pixi/tilemap has a canvas fallback, although it is significantly slower. In the future, I might spin out a @pixi/canvas-tilemap to make this fallback optional.