Shukant Pal – Pal's Blog

Pitfalls of debounced functions

The debounce technique is used to delay processing an event or state until it stops firing for a period of time. It’s best used for reactive functions that depend only on the current state. A common use case is debouncing form validation – say you want to show “weak password” only once the user has stopped typing out a password.

Lodash’s debounce takes a callback to be debounced and wraps it such that the callback is invoked only once for each burst of invocations less than “n” milliseconds apart. It also provides a timeout in case you need a guarantee that an invocation eventually does occur.
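To make those semantics concrete, here is a minimal trailing-edge debounce – a sketch only, not lodash’s full API (which also offers leading-edge invocation, cancel / flush, and the maxWait guarantee mentioned above):

```javascript
// A minimal trailing-edge debounce (sketch, not lodash's full API).
function debounce(fn, wait) {
  let timer = null;
  const debounced = (...args) => {
    clearTimeout(timer);                         // restart the burst window
    timer = setTimeout(() => fn(...args), wait); // fire once the burst ends
  };
  debounced.cancel = () => clearTimeout(timer);
  return debounced;
}

// e.g. validate only after the user stops typing for 300ms:
// const validate = debounce(checkPasswordStrength, 300);
```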

Parameter aggregation

This “vanilla” debounce assumes that the end application state is unaffected by whether the callback is invoked once or multiple times. This is true if the debounced callback entirely overwrites the same piece of state in your application. In a password validator, the “password strength” state is recalculated each time. The password strength doesn’t depend on past values of any other state. That’s why the validator can safely be debounced.

If you use debounce as a general-purpose optimization, you’ll find that this assumption is often false. A simple example is a settings page with multiple toggles that each map to a different setting in the database.

const updateSettings = debounce((diff/*: Partial<{
  username: string,
  password: string,
}>*/) => {
  fetch('/v1/api/settings', {
    method: 'POST',
    body: JSON.stringify(diff),
  })
})

Say the user changes their username and then their password. The above debounced callback will save only the last change – their password – but not their username. A correct implementation would aggregate the modifications on each invocation and send the aggregated changes to the server at the end. I’ve seen this mistake quite a few times. For another practical example, a debounced undo / redo function will not behave the way you’d want it to (but that one is not so subtle compared to the settings example!)

I propose an alternate debounce implementation that accepts an aggregator. The aggregator is a callback that is not debounced and merges the arguments of each invocation in a lightweight fashion. The aggregator equivalent to no aggregation would be:

const no_aggregation = (previousArgs, args) => args

debounce(passwordValidator, {
  aggregator: no_aggregation, // or omitted entirely
})

But in the case of the settings updates, you would do a deep merge of each diff:

const updateSettings = debounce((diff/*: Partial<{
  username: string,
  password: string,
}>*/) => {
  fetch('/v1/api/settings', {
    method: 'POST',
    body: JSON.stringify(diff),
  })
}, {
  aggregator: (previousDiff, diff) => ({
    ...previousDiff,
    ...diff,
  }),
})

This should work for most use-cases. It’s important to make sure the debounced callback doesn’t read global state, because global state cannot be aggregated easily.
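A sketch of what such a debounce-with-aggregator could look like – the names debounceAggregated and aggregator are my proposal, not an existing library API. The aggregator receives the previously aggregated argument list (or undefined at the start of a burst) and the new argument list:

```javascript
// Sketch: debounce that merges arguments across a burst via an aggregator.
// (Hypothetical API - not lodash.)
function debounceAggregated(fn, wait, { aggregator = (_prev, args) => args } = {}) {
  let timer = null;
  let pending;           // aggregated arguments for the current burst
  let hasPending = false;
  return (...args) => {
    pending = aggregator(hasPending ? pending : undefined, args);
    hasPending = true;
    clearTimeout(timer);
    timer = setTimeout(() => {
      const aggregated = pending;
      hasPending = false;
      fn(...aggregated);  // invoked once with the merged arguments
    }, wait);
  };
}

// Settings example: merge each diff so no toggle is lost. The spread here is
// a shallow merge, which suffices for the flat diff shown above; nested
// settings would need a deep merge.
const updateSettings = debounceAggregated(
  (diff) => fetch('/v1/api/settings', { method: 'POST', body: JSON.stringify(diff) }),
  300,
  { aggregator: ([prev] = [], [diff]) => [{ ...prev, ...diff }] },
);
```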

Partitioned debouncing

An interesting puzzle for me was when I added group selection for a 2D editor. I debounced sending new item positions to the server, which worked well for the single-item editor use-case:

const save = debounce((itemId, x, y) => {
  server.send('position-changed', { itemId, x, y })
})

However, when I enabled multiple item selection and dragged a group around – each item would end up in the wrong place (save for one). In hindsight, this “obviously” was because debouncing across all saves meant that only one item’s position would end up being saved at a time.

Figure: a single debounce across individual saves causes only one item’s position to be saved.

A way to solve this is debouncing saves per item instead of across all items. Theoretically, we could’ve applied this to the settings example too – debouncing the saving of each setting individually (although you’d end up with multiple network requests).

A powerful debounce implementation could accept a “partitioning” callback that returns a key unique to each context that should be debounced individually. Somewhat as follows:

const save = debounce((itemId, x, y) => {
  server.send('position-changed', { itemId, x, y })
}, {
  partitionBy: (itemId) => itemId
})

The implementation would internally map each partition string to independent timers.
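A sketch of that idea – debouncePartitioned and partitionBy are hypothetical names, and each partition key simply maps to its own timer:

```javascript
// Sketch: partitioned debounce. partitionBy derives a key from the call's
// arguments; each key gets an independent timer. (Hypothetical API.)
function debouncePartitioned(fn, wait, { partitionBy = () => 'default' } = {}) {
  const timers = new Map(); // partition key -> pending timer
  return (...args) => {
    const key = partitionBy(...args);
    clearTimeout(timers.get(key)); // only resets this partition's burst
    timers.set(key, setTimeout(() => {
      timers.delete(key);
      fn(...args);
    }, wait));
  };
}

// Dragging a group now saves every item's last position:
// const save = debouncePartitioned(
//   (itemId, x, y) => server.send('position-changed', { itemId, x, y }),
//   300,
//   { partitionBy: (itemId) => itemId },
// );
```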

Another practical use-case for this would be a message queue where you need to debounce messages partitioned by message type or user, so that they can be rate limited.

Debounce β‰  Async queue

Another misconception I’ve seen is that debounced async callbacks won’t have concurrent executions.

Say, for example, that you are debouncing a toggle that opens a new tab. To open the tab, some data must be loaded asynchronously; closing the tab, however, is synchronous. You’ve chosen to debounce this toggle to prevent double or triple clicks from borking the application.

Now, what happens when the user clicks the toggle twice (but not fast enough to register as a double click)?

  • First click, toggle on
  • Debounce timeout
  • Data starts being loaded
  • Second click, toggle off
  • Debounce timeout
  • Tab isn’t open yet, so second click does nothing
  • Data loaded, tab opened, & user annoyed πŸ˜›

The expected behavior was for the tab to close or never open after the second click. This issue is not related to debouncing – you need to either cancel the first async operation or, if that’s not possible, process each click off a queue. That way the second click is processed once the data has loaded.

This is a bad example, however, because it’s easy to simply cancel rendering a tab. In distributed applications, cancellation isn’t always possible because messages could be processed on some unknown server.

An async debounce would wait for the last invocation to resolve before doing another invocation. A rough implementation would be as follows:

let promise = null;
let queued = false;

function debounced_process() {
  if (!promise) {
    // No invocation in flight - start one immediately.
    let thisPromise;
    thisPromise = promise = process().then(() => {
      if (promise === thisPromise) // reset unless queued
        promise = null;
    });
  } else if (!queued) {
    // An invocation is in flight - queue exactly one more after it.
    queued = true;
    let thisPromise;
    thisPromise = promise = promise.then(() => {
      queued = false;
      return process();
    }).then(() => {
      if (promise === thisPromise)
        promise = null;
    });
  } else {
    // We've already queued "process" to be called again after
    // the current invocation. We shouldn't queue it again.
  }
}

This specialized debounce is perhaps better than the vanilla debounce for long-running async tasks.
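Generalizing that pattern into a reusable factory (my own refactoring, not part of any library) makes the guarantee easy to see: at most one execution in flight, plus at most one queued.

```javascript
// Sketch: async debounce as a factory wrapping any async task.
// Guarantees: never two concurrent executions, and at most one queued rerun.
function asyncDebounce(task) {
  let promise = null;
  let queued = false;
  return function run() {
    if (!promise) {
      // Nothing in flight - execute immediately.
      let thisPromise;
      thisPromise = promise = task().then(() => {
        if (promise === thisPromise) promise = null; // reset unless queued
      });
    } else if (!queued) {
      // In flight - queue exactly one rerun after it settles.
      queued = true;
      let thisPromise;
      thisPromise = promise = promise.then(() => {
        queued = false;
        return task();
      }).then(() => {
        if (promise === thisPromise) promise = null;
      });
    }
    // else: a rerun is already queued; nothing to do.
    return promise;
  };
}
```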


Offline documentation with webdoc

Before going on a long flight, I download PDFs of reference documentation for whatever software library I will be programming with. Having the documentation handy means I won’t get stuck on an unfamiliar edge case. It would be very convenient if documentation websites could be cached offline – and that’s why I added an offline storage option in webdoc. The documentation for PixiJS and melonJS can now be downloaded for offline use! I’ll walk you through how I did it – the technique can be replicated for any static website.

How is it done?

It’s done using a service worker!

A service worker is a script that acts as a proxy between a browser and the web server. It can intercept fetches done on the main thread and respond with a cached resource, or a computed value, or allow the request to continue the “normal” way to the web server. The service worker controls a cache storage and can decide to put or delete resources from its caches. Note that this “cache storage” is separate from the browser’s regular HTTP cache.

If your static website is hosted on a free service like GitHub Pages, being able to control the caching policy can be very handy. GitHub Pages’ caching policy sets a 10-minute TTL for all HTTP requests; this adds redundant downloads to repeat visits. A service worker can be leveraged to evict cached resources only when a web page has been modified.

A service worker runs in a thread separate from the page’s main thread. It stays active even when the device is not connected to the Internet, so it can serve cached pages offline!

webdoc’s caching policy

webdoc’s service worker uses two separate caches:

  1. The “main cache” holds the core assets of the website – react, react-dom, mermaid.js, material-ui icons. These assets have versioned URLs, so they never need to be evicted. The main cache is simple and has no eviction policy.
  2. The “ephemeral cache” holds all assets that might be modified when documentation is regenerated. This is the generated HTML and webdoc’s own CSS / JS. To facilitate cache eviction, webdoc generates an MD5 hash of its documentation manifest and inserts it into the website it generates.
    1. This hash is refetched on the main thread and stored in web storage on each page load.
    2. The service worker tags cached resources with the associated hash when they are downloaded. The tagging is done by appending a “x-manifest-hash” header to responses.
    3. A hash mismatch between web storage and a cached response effectuates a refetch from the web server, and the cache is updated with the new response.
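The tagging step could look roughly like this – a reconstruction of the idea; webdoc’s actual helper (fetchVersioned, shown later) may differ:

```javascript
// Sketch: copy a response, adding the x-manifest-hash header so the cached
// copy can be validated on later page loads. (Reconstruction - webdoc's
// real fetchVersioned helper may differ.) Requires a Response/Headers
// implementation, e.g. a service worker context or Node 18+.
function tagResponse(response, manifestHash) {
  const headers = new Headers(response.headers);
  headers.set('x-manifest-hash', manifestHash);
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}
```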

Let’s dive into the code


const registration = await navigator.serviceWorker.register(getResourceURI("service-worker.js"));

The first step is to register the service worker so that the browser downloads and runs it. getResourceURI is a helper to locate a static resource in a webdoc site.

Before the main thread can communicate with the service worker, the browser must activate it so the second step is to wait for the registration to activate.

const waitOn = navigator.serviceWorker.controller ?
    Promise.resolve() :
    new Promise((resolve, reject) => {
      const worker = registration.installing ?? registration.waiting ?? registration.active;
      if (!worker) return reject(new Error("No worker found"));
      else {
        worker.onstatechange = () => {
          if (worker.state === "active") {
            resolve();
          }
        };
      }
    });

// This hangs on the first page load because the browser doesn't
// activate the service worker until the second visit.
await waitOn;

navigator.serviceWorker.controller is what lets the main thread control and communicate with service workers. Its value is null until the service worker activates – which is signaled by the “statechange” event on the worker.

Note that the service worker won’t activate on the first page load; the browser activates it on the second page load. That’s why it’s important to wait for the controller to become non-null.

Hash verification

Once the service worker is registered, the website hash must be downloaded and compared to what is in web storage. The local hash will be null right after the service worker is registered for the first time; this means a hash mismatch will occur (which is desired).

    if (!APP_MANIFEST) {
      throw new Error("The documentation manifest was not exported to the website");
    }

    // Use webdoc's IndexedDB wrapper to pull out the manifest URL & hash
    const {manifest, manifestHash, version} = await this.db.settings.get(APP_NAME);
    // Download the latest hash from the server
    const {hash: verifiedHash, offline} = await webdocService.verifyManifestHash(manifestHash);

    // If the manifest URL or hash don't match, then we need to update IndexedDB
    // and send a message to the service worker!
    if (manifest !== APP_MANIFEST ||
          manifestHash !== verifiedHash ||
          version !== VERSION) {"Manifest change detected, reindexing");

      await this.db.settings.update(APP_NAME, {
        manifest: APP_MANIFEST,
        manifestHash: verifiedHash,
        origin: window.location.origin,
        version: VERSION,
      });

      if (typeof APP_MANIFEST === "string") {
        navigator.serviceWorker.controller.postMessage({
          type: "lifecycle:init",
          app: APP_NAME,
          manifest: APP_MANIFEST,
        });
      }
    }


The service worker receives the lifecycle:init message on a hash mismatch and uses it to download the manifest data and recache the website if offline storage is enabled.

  case "lifecycle:init": {
    // Parse the message
    const {app, manifest} = (message: SwInitMessage);

    try {
      // Open the database & fetch the manifest concurrently
      const [db, response] = await Promise.all([,
        fetch(new URL(manifest, new URL(registration.scope).origin)),
      ]);
      const data = await response.json();

      // Dump all the hyperlinks in the manifest into IndexedDB. This is used by
      // "cachePagesOffline" to locate all the pages in the website that need to
      // be downloaded for offline use.
      await db.hyperlinks.put(app, data.registry);

      // Caches the entire website if the user has enabled offline storage
      const settings = await db.settings.get(app);
      if (settings.offlineStorage) await cachePagesOffline(app);
    } catch (e) {
      console.error("fetch manifest", e);
    }

    break;
  }



Now let’s walk through how webdoc caches resources on the website. The caching policy is what makes the website work when a user is offline, and it also makes the pages load instantly otherwise. The fetch event is intercepted by the service worker and a response is returned from the cache if available.

// Registers the "fetch" event handler
self.addEventListener("fetch", function(e: FetchEvent) {
  // Skip 3rd party resources like analytics scripts. This is because
  // the service worker can only fetch resources from its own origin
  if (new URL(e.request.url).origin !== new URL(registration.scope).origin) {
    return;
  }

The respondWith method on the event is used to provide a custom response for 1st party fetches. The caches global exposes the cache storage API used here.

  e.respondWith(Promise.all([
    // Open the main & ephemeral cache together (cache names assumed here),,
  ]).then(async ([mainCache, ephemeralCache]) => {
    // Check main cache for a hit first - since we know hash verification
    // isn't required for versioned assets in that cache
    const mainCacheHit = await mainCache.match(e.request);
    if (mainCacheHit) return mainCacheHit;

    // Check the ephemeral cache for the resource and also pull out the hash
    // from IndexedDB
    const ephemeralCacheHit = await ephemeralCache.match(e.request);
    const origin = new URL(e.request.url).origin;
    const db = await;
    const settings = await db.settings.findByOrigin(origin);

    if (settings && ephemeralCacheHit) {
      // Get the tagged hash on the cached response. Remember responses are
      // tagged using the x-manifest-hash header
      const manifestHash = ephemeralCacheHit.headers.get("x-manifest-hash");

      // If the hash matches, great!
      if (settings.manifestHash === manifestHash) return ephemeralCacheHit;
      // Otherwise continue and fetch the resource again
      else {"Invalidating ", e.request.url, " due to bad X-Manifest-Hash",
          `${manifestHash} vs ${settings.manifestHash}`);
      }
    }

If the main & ephemeral cache don’t get hit, then the resource is fetched from the web server by the service worker. A fetchVersioned helper is used to add the “x-manifest-hash” header to the returned response. The response is put into the appropriate cache so a future page load doesn’t cause a download.

      try {
        // Fetch from the server and add "x-manifest-hash" header to response
        const response = await fetchVersioned(e.request);

        // Check if the main cache can cache this response
        if (VERSIONED_APP_SHELL_FILES.some((file) => e.request.url.endsWith(file))) {
          await mainCache.put(e.request, response.clone());
        // Check if the ephemeral cache can hold the response (all HTML pages are included)
        } else if (
          settings && (
            EPHEMERAL_APP_SHELL_FILES.some((file) => e.request.url.endsWith(file)) ||
            e.request.url.endsWith(".html"))) {
          await ephemeralCache.put(e.request, response.clone());
        }

        return response;
      } catch (err) {
        // Finish with cached response if network offline, even if we know it's stale
        if (ephemeralCacheHit) return ephemeralCacheHit;
        else throw err;
      }

Note that at the bottom, a catch block is used to return a cached response even if we know the hash didn’t match. This occurs when the resource is stale but the user isn’t connected to the Internet so downloading the latest resource from the web server isn’t possible.

webdoc is the only documentation generator with offline storage that I know of. It supports JavaScript and TypeScript codebases. Give it a try and let me know what you think!


Do not use uncompressed textures

Texture uploading

The standard way to upload a texture to graphics hardware in WebGL is to use texImage2D. It comes from the OpenGL ES glTexImage2D function, which accepts no native multimedia formats but rather a buffer of raw pixel data. That means the WebGL implementation abstracts away an inefficient process of decompressing images before uploading them to graphics memory.

This under-the-hood decompression takes up CPU cycles when an application loads its assets. Not only that, but uncompressed textures are memory inefficient; for context, a 2048×2048 8-bit RGBA texture takes up a minimum of 16mb of graphics memory.
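The arithmetic behind that figure is simply width × height × bytes per pixel:

```javascript
// 8-bit RGBA = 4 bytes per pixel. Mipmaps add roughly another 33% on top.
const textureBytes = (width, height, bytesPerPixel = 4) =>
  width * height * bytesPerPixel;

textureBytes(2048, 2048); // → 16,777,216 bytes = 16 MiB
```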

It’s easy to load up many textures and hog hundreds of megabytes of graphics memory. When the GPU memory limit is hit, desktop computers will generally start paging memory to disk, which causes system-wide graphics pauses lasting for seconds. On mobile devices, the OS immediately kills applications using too much memory to prevent any slowdowns. I’ve noticed iOS Safari will reload the page if it takes up too much graphics memory – and that budget includes the memory the browser itself uses to render HTML.

Texture compression

I recommend using GPU-compressed texture formats to reduce the resources consumed by an application’s textures. This form of texture compression is designed specifically for hardware-accelerated graphics programs. GPU-compressed formats generally use “block compression”, where blocks of pixels are compressed into a single datapoint. I dive into how you can use them in PixiJS at the end.

The WebGL API provides support for GPU-compressed texture formats through various extensions. A GPU-compressed texture does not get decompressed before being uploaded to graphics memory. Instead, the GPU decompresses texels on-the-fly in hardware when a shader reads the texture. Texture compression is designed to allow “random access” reads that don’t require the GPU to decompress the whole texture to retrieve a single texel.
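To see why block compression saves so much memory, compare the fixed block sizes of the S3TC/DXT family against raw RGBA (4 bytes per pixel):

```javascript
// S3TC/DXT formats encode each 4x4 pixel block in a fixed number of
// bytes: 8 for DXT1, 16 for DXT5.
function compressedBytes(width, height, bytesPerBlock) {
  return Math.ceil(width / 4) * Math.ceil(height / 4) * bytesPerBlock;
}

compressedBytes(2048, 2048, 8);  // DXT1: 2,097,152 bytes (2 MiB) vs. 16 MiB raw
compressedBytes(2048, 2048, 16); // DXT5: 4,194,304 bytes (4 MiB)
```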

By using compressed texture formats, a WebGL application can free up CPU cycles spent by the browser decompressing images and reduce the video memory occupied by textures. The latter is especially important on low-end devices or when your application is using a lot of textures.

I built a tool, Zap, to let you generate GPU-compressed textures with no setup!


GPU-compressed textures come with their caveats, like all good things in life!

One trade-off is that GPU-compressed textures take up more disk space than native images, like PNGs, which use more sophisticated compression algorithms. This means downloading a bunch of compressed textures can increase asset loading time over the network.

GPU-compressed formats are also very platform-dependent. Most graphics cards support only one or two of the several texture compression formats. This is because the decoding algorithm has to be built into the hardware. And so, having a fallback to native images is still necessary; maintaining images in several different compressed formats is also a bunch of homework. PixiJS’ @pixi/compressed-textures allows an application to serve a texture manifest that lists the available formats; then, the texture loader picks the appropriate version based on the device.
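The manifest-driven selection could be sketched like this – a hypothetical shape for illustration, not the actual @pixi/compressed-textures API:

```javascript
// Hypothetical: pick the first manifest entry in a format the device
// supports, falling back to a native image.
function pickTextureURL(manifest, supportedFormats) {
  const entry =
    manifest.find((e) => supportedFormats.includes(e.format)) ??
    manifest.find((e) => e.format === 'png'); // native-image fallback
  return entry && entry.url;
}
```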


“Supercompressed” texture formats like Basis solve the disk size and platform-dependency problems of GPU-compressed formats. A supercompressed texture is an intermediate format served over the network and then transcoded into a supported GPU-compressed format on the client. Basis provides a transcoder build that can be invoked from JavaScript. As a fallback, the Basis transcoder also allows decoding to an uncompressed 16-bit RGB texture.

The basis transcoder is an additional ~500kb of JavaScript / WebAssembly code (without gzip compression). Fetching and executing it adds a tiny overhead when initializing, but that should be worth it if you use more than a megabyte of images. The Basis supercompressed format is still smaller than native formats like PNG, so you might actually save download time on average.

Testing how much GPU memory is saved

If you’ve kept reading to this point, you might be wondering how to tell whether compressed textures are worth it.

I made two sample apps that load the same 600x400px texture 100 times, one using uncompressed textures, and the other using compressed textures. A small canvas is used to reduce the framebuffer’s effect on memory usage. I used PixiJS because PixiJS 6’s @pixi/compressed-textures has out-of-the-box support for compressed formats!

You can open the sample apps in Chrome and open the browser task manager. Note that you might have to wait up to 30 seconds for them to load because Replit seems to throttle the image downlink. To view the GPU memory of each process, you’ll need to enable that column.

The uncompressed sample (above) takes 100mb of GPU memory.

While the compressed sample takes only 30mb – that’s 70% less hogged memory! PixiJS also has to create an HTMLImageElement for each native image texture, and you can see that also affects the main memory usage.

Of course, the trade-off is the 4–5× download size of the textures (6mb vs. 25mb). As I mentioned earlier, if you’re downloading more than a megabyte of textures – it’s worth using supercompression to save bandwidth.

PixiJS 6’s @pixi/basis adds support for using Basis textures. To test Basis, I generated a Basis texture from Zap and plugged it into this sample.

The results are similar to that of the compressed texture sample; in this case, PixiJS chose a more compact compressed format (DXT1) than the one I uploaded in the prior sample (DXT5) so GPU memory usage has further decreased.

Moreover, this sample fetches all textures in just 1.7mb of network usage!

Notice the “dedicated worker” row in the task manager. @pixi/basis creates a pool of workers for transcoding so the application UI does not slow down.

Try it out using Zap

Zap is a tool I built to help you get started with texture compression. Traditional tools like Compressonator, NVIDIA’s Texture Tools, and PVRTexTool are clunky, OS-specific, and have a steep learning curve. I literally had to install Windows to test out Compressonator, and it was really slow.

Zap is a simple web app that lets you upload a texture to be processed by my server. It supports 10 different compression formats plus the UASTC basis supercompression format. Not only that, it’s free to use (for now πŸ˜€).

To use Zap, simply upload a native image and select the compression formats you want. That will redirect you to a permanent link, at which the compressed textures will be available after processing. It may take several seconds on larger images. Note the compressed textures will be deleted after a few days.

PixiJS Compressed Textures Support

PixiJS 6 supports most of the GPU-compressed formats out of the box (exception being BC7). You can use them just like you use native images.

To use the Basis format, you need to import BasisLoader from @pixi/basis, load the transcoder, and register the loader. Then, the PixiJS Loader API can be used in the standard manner:

// Include the following script, if not using ESM
// <script src=""></script>
import { BasisLoader } from '@pixi/basis';
import { Loader } from 'pixi.js';

// Load transcoder from jsDelivr (transcoder URLs elided here)
// Without this, PixiJS can't decompress *.basis files!
BasisLoader.loadTranscoder(transcoderJS, transcoderWASM);

// Make sure the BasisLoader is being used!
Loader.registerPlugin(BasisLoader);

// Usage:
    .add("your-file.basis", "your-file.basis")
    .load((_, resources) => {
       // Use this texture!
       const texture = resources['your-file.basis'].texture;
    });

Hey there, I’m Shukant and I’m building the future of work at Teamflow, the best virtual office for remote companies. Thanks for visiting my site!


PixiJS Picture Kit

Ivan Popleyshev has been working hard upgrading the libraries he has authored to PixiJS 6. The next one up is PixiJS Picture Kit – a shader-based implementation of blending modes that WebGL doesn’t natively support. Apart from blending, the “backdrop” texture it exposes can be used for other kinds of filtering.

Blend Modes

This section goes over blend modes and how they work in WebGL.

The blend mode defines how colors output by the fragment shader are combined with the framebuffer’s colors. In more simple terms, a blend mode is used to mix the colors of two objects where they overlap. Below, blend modes supported in PixiJS are shown by rendering a semi-transparent blue square over a red circle:

A showcase of all the blend modes available in PixiJS. The 4th column shows the blend modes PixiJS Picture adds. Click on the image to edit the code!

The normal blend mode makes the source color, blue, appear in the foreground over the destination color, red, in the background.

Porter Duff operations

The blend modes in the 2nd and 3rd columns have suffixes OVER, IN, OUT, and ATOP describing Porter Duff operations. These represent image compositing operations.

  • OVER – colors are mixed for the whole object being rendered
  • IN – only overlapping areas are rendered
  • OUT – colors are outputted only in non-overlapping areas
  • ATOP – colors are outputted only over existing content

In PixiJS, the blend modes only apply to pixels in the object being rendered, so the compositing operations look a bit different. For example, SRC_IN and SRC_ATOP look the same here. An actual IN operation would erase non-overlapping areas in the red circle. But since PixiJS only applies the blend mode in the blue square’s area, this is not possible with blending.

The blend modes with prefix DST switch which color is in the foreground. Even though the blue squares are rendered after the red circles, they are behind with DST blend modes. The DST_OVER blend mode will make a scene appear as if z-indices were reversed.


The blend modes in the 1st column change the arithmetic used to mix the source and destination color.

  • ADD – Sums the source and destination color with equal weighting instead of alpha-weighting
  • SUBTRACT – Subtracts the source color from the destination. Negative values are clipped to zero.
  • MULTIPLY – The colors are multiplied, which always results in darker colors.

Blend equation

The blend equation is a linear function on the source color and destination color that calculates the output color. This equation can be set separately for the RGB and alpha components of colors.

The blend equation: output = f(srcWeight × srcColor, dstWeight × dstColor)

blendFunc is used to set the weights for the source and destination colors. Instead of passing predefined values for these weights, a WebGL constant representing these weights needs to be passed. For example, gl.DST_ALPHA will set the weight to the destination color’s alpha.

For the normal blend mode, you’d use:

gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA)

blendFuncSeparate can be used to separately set these weightings for the RGB and alpha channels of colors.


blendEquation sets which equation is used to mix the colors after they’ve been multiplied by weights. You can add, subtract, reverse subtract, and even min / max. Most blend modes use the add equation.

blendEquation basically sets the function “f” in the blend equation shown before. The available equations are gl.FUNC_ADD, gl.FUNC_SUBTRACT, gl.FUNC_REVERSE_SUBTRACT, and (in WebGL 2, or WebGL 1 with the EXT_blend_minmax extension) gl.MIN and gl.MAX.

blendEquationSeparate can be used to separately set which equation is used to mix the RGB and alpha channels of colors.


The StateSystem manages the blend mode for the PixiJS renderer. It works by mapping BLEND_MODES to the parameters for blendFunc and blendEquation described above. If you want to add more blend modes of your own, you can modify the undocumented blendModes map in the state system.

blendModes basically maps each blend-mode to a list of parameters to blendFuncSeparate and blendEquationSeparate. These lists can have up to 8 parameters but only the first two are required. The ADD equation is used by default.

import { BLEND_MODES } from 'pixi.js';

const stateSystem = renderer.state;

// Add OVERWRITE blend mode and set it to a unique value!
BLEND_MODES.OVERWRITE = 100; // any non-conflicting number works

// This blend mode will basically overwrite the destination
// color with the source color. The destination has zero
// weight in the output.
stateSystem.blendModes[BLEND_MODES.OVERWRITE] = [
  WebGLRenderingContext.ONE,  // source weight
  WebGLRenderingContext.ZERO, // destination weight
];

In the above snippet, I create an OVERWRITE blend mode that will make the background disappear wherever an object is rendered and only keep its pixels in the framebuffer.

Custom blending with shaders

The “fixed function” blend modes shown so far are rather limited. The normal blend mode is by far the most used – for alpha compositing. The other Porter Duff variants can be used for masking. To make more complicated and artistic blend modes, a shader that samples the “source” and “destination” colors from textures is used.

PixiJS Picture implements this kind of shader as a filter. The source object is rendered into a filter texture. The destination pixels from the framebuffer or canvas are copied into a backdrop texture. The dimensions of these textures must be the same.

// Fragment shader

// Texture coordinates for pixels to be blended.
varying vec2 vTextureCoord;

// Filter texture with source colors
uniform sampler2D uSampler;

// Backdrop texture with destination colors
uniform sampler2D uBackdrop;

This type of filter is called a BlendFilter. The fragment shader in BlendFilter is a template:

void main(void)
{
   vec2 backdropCoord = vec2(vTextureCoord.x, uBackdrop_flipY.x + uBackdrop_flipY.y * vTextureCoord.y);
   vec4 b_src = texture2D(uSampler, vTextureCoord);
   vec4 b_dest = texture2D(uBackdrop, backdropCoord);
   vec4 b_res = b_dest;

   %blendCode%

   gl_FragColor = b_res;
}

  • b_src is the source color sampled from the filter texture
  • b_dest is the destination color sampled from the backdrop texture
  • b_res is the output color calculated from the source and destination colors by the blending code. It’s set to the destination color by default.

When a BlendFilter is constructed, the %blendCode% token is replaced by the blending code, which calculates the output color. This way multiple blend modes can be implemented by just writing the blending code for each one. You can find examples of these shader parts in the source code. To emulate the normal blend mode, the blending code would look something like this:

// Note: b_src and b_dest are premultiplied by alpha like
// all other colors in PixiJS.

b_res = b_src + (1.0 - b_src.a) * b_dest;

To use this code as a blend filter, you can construct a BlendFilter and apply it to a display object.

import { BlendFilter } from '@pixi/picture';

// Blending code for normal blend mode
const NORMAL_SHADER_FULL = `
    b_res = b_src + (1.0 - b_src.a) * b_dest;
`;

// Create globally shared instance of blend filter. This is a
// good optimization if you're going to use the filter on multiple
// objects.
const normalBlendFilter = new BlendFilter({
    blendCode: NORMAL_SHADER_FULL,
});

// Apply the filter on the source object
sourceObject.filters = [normalBlendFilter];

Built-in blend filters

PixiJS Picture implements filters for these blend modes:


getBlendFilter maps each blend mode to instances of their blend filters, which can be applied on a display object to emulate the blend mode.

import { BLEND_MODES } from 'pixi.js';
import { getBlendFilter } from '@pixi/picture';

sourceObject.filters = [getBlendFilter(BLEND_MODES.OVERLAY)];

PixiJS Picture also exports special versions of Sprite and TilingSprite where you can set the blendMode directly and a blend filter is implicitly applied:

// Note: Use the Sprite exported from @pixi/picture and
// not the default one from pixi.js!
import { Sprite } from '@pixi/picture';
import { BLEND_MODES, Texture } from 'pixi.js';

// Set the blendMode on the sprite directly. When the
// sprite renders, it will use the blend filter from
// getBlendFilter() automatically.
const sprite = new Sprite(Texture.WHITE);
sprite.blendMode = BLEND_MODES.OVERLAY;


The dissolve blend mode randomly chooses pixels from the source or destination texture to output. The likelihood of choosing the source color is equal to its alpha, i.e. a 0.5 alpha means half of the output pixels will come from the top layer and the rest will be from the bottom layer. In this mode, colors aren’t truly “mixed”.

(left) The dissolve blend mode vs (right) the normal blend mode. Click to see the example live!

The blending code for this is really simple:

// Noise function that generates a random number between 0 and 1
float rand = fract(sin(dot(
    vTextureCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);

if (rand < b_src.a) {
    b_res = b_src;
}

The famous one-liner rand is used to generate a random number between 0 and 1. If this random variable is less than the alpha of the source color, then the resulting color is set equal to the source color. Otherwise, the resulting color is set to the destination color (by default).
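The same hash is easy to play with in JavaScript – a hypothetical port, with GLSL's fract(x) written as x - Math.floor(x):

```javascript
// Hypothetical JavaScript port of the GLSL noise one-liner.
// Hashes a texture coordinate into a pseudo-random number in [0, 1).
function rand(x, y) {
  // dot(vTextureCoord.xy, vec2(12.9898, 78.233))
  const d = x * 12.9898 + y * 78.233;
  // fract(sin(d) * 43758.5453)
  const v = Math.sin(d) * 43758.5453;
  return v - Math.floor(v);
}
```

Because the hash depends only on the coordinate, each pixel keeps its choice for a given texture size – the dissolve pattern is stable rather than flickering.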

A BlendFilter is needed to use it:

import { BlendFilter } from '@pixi/picture';

// Blending code for the dissolve blend mode
const DISSOLVE_FULL = `
    // Noise function that generates a random number between 0 and 1
    float rand = fract(sin(dot(
        vTextureCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);

    if (rand < b_src.a) {
        b_res = b_src;
    }
`;

// Create blend filter
const dissolveBlendFilter = new BlendFilter({
    blendCode: DISSOLVE_FULL,
});

// Apply it!
sourceObject.filters = [dissolveBlendFilter];

You can also augment BLEND_MODES and create a DISSOLVE blend mode. The blendFullArray exported from @pixi/picture contains the blending code for each mode – the dissolve code needs to be added as well.

import { BLEND_MODES } from 'pixi.js';
import { Sprite, blendFullArray } from '@pixi/picture';

// Any non-conflicting number high enough works here!
BLEND_MODES.DISSOLVE = 100;

// Register the blending code with @pixi/picture
blendFullArray[BLEND_MODES.DISSOLVE] = DISSOLVE_FULL;

// Set it on a PixiJS Picture sprite
new Sprite().blendMode = BLEND_MODES.DISSOLVE;

Now, you can set the blendMode to dissolve directly on a PixiJS Picture sprite.

Backdrop filters

Blend filters use the backdrop texture to read the destination color. If you have imported @pixi/picture, you can use the backdrop in other filters as well!

PixiJS Picture augments the filter system so that it copies pixels from framebuffer / canvas into the backdrop texture before a filter stack is applied. This backdrop texture is then available to filters as a uniform. The name of the uniform is configured by the backdropUniformName property. For BlendFilter, this is set to uBackdrop.

import { BackdropFilter } from '@pixi/picture';

const fragmentSrc = `
    // The filter texture containing the object being rendered
    uniform sampler2D uSampler;

    // The backdrop texture
    uniform sampler2D uBackdrop;

    // TODO: Your shader code
`;

class CustomBackdropFilter extends BackdropFilter {
    constructor() {
        super(/* vertexSrc, fragmentSrc */);
        // Set the backdropUniformName so the backdrop
        // texture is available to shader code.
        this.backdropUniformName = 'uBackdrop';
    }
}

Magnifying glasses

Ivan shows how you can use the backdrop texture with his “magnifying glasses” example:

Click to see the example live!

The grass background is rendered first. The two “lens” sprites are then rendered with a “displacement” filter. The lens texture is a displacement map – each texel encodes how much each pixel must be displaced.

The displacement texture

The R channel holds the displacement in the x-direction and the G channel holds it for the y-direction. The (R, G) values are centered at (0.5, 0.5) and then scaled by a certain magnitude.

x = (r - 0.5) * scale;
y = (g - 0.5) * scale;

The centering is done because color values must be between 0 and 1, and displacements can have negative values.
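In JavaScript, the decoding step would look like this (a hypothetical helper mirroring the shader math):

```javascript
// Decode a displacement vector from a texel's R/G channels.
// r and g are in [0, 1]; (0.5, 0.5) means "no displacement".
function decodeDisplacement(r, g, scale) {
  return {
    x: (r - 0.5) * scale,
    y: (g - 0.5) * scale,
  };
}

// A texel of (0.75, 0.25) with a scale of 20 displaces the
// sampled pixel by (+5, -5).
decodeDisplacement(0.75, 0.25, 20); // → { x: 5, y: -5 }
```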

The displacement filter samples the “lens” texture and calculates the displacement vector for the current pixel. It then samples the backdrop texture by adding this displacement to the passed texture coordinates:

// Read the displacement data from the lens texture
vec4 map = texture2D(uSampler, vTextureCoord);

// Map it into the displacement vector
map -= 0.5;
map.xy *= scale;

// Add the displacement vector to the texture coordinates,
// and then clamp it in case it goes outside of the
// backdrop texture.
vec2 dis = clamp(
    vec2(vTextureCoord.x + map.x, vTextureCoord.y + map.y),
    inputClamp.xy, inputClamp.zw);

// Handle y-flipping
dis.y = dis.y * backdropSampler_flipY.y + backdropSampler_flipY.x;

// Sample backdrop and output color
gl_FragColor = texture2D(backdropSampler, dis);

Mask filter

The MaskFilter allows you to apply a filter on the backdrop wherever it overlaps with a mask. A common use case is the backdrop blur effect, which can be implemented by simply passing a blur filter to MaskFilter:

// Gaussian blur filter
import { BlurFilter, Graphics } from 'pixi.js';

// Masking filter
import { MaskFilter } from '@pixi/picture';

const mask = new Graphics()
  .beginFill(0xffffff, 1)
  .drawRect(0, 0, 100, 100);

mask.filters = [
  new MaskFilter(new BlurFilter()),
];

The above has the effect of blurring the background in the rectangle (0, 0, 100, 100). The white rectangular mask itself won’t be visible. Instead, another translucent white rectangle must be added to make it appear visible.

The backdrop blur example

If you’ve been reading up until here, I’m glad this article was informative. As the software industry goes remote, we all need a new office. Check out Teamflow, a virtual office built for the future.


PixiJS Layers Kit

Ivan Popelyshev recently migrated the PixiJS Layers Kit to be compatible with PixiJS 6. I helped him also document the @pixi/layers API here. This package introduces an interesting concept – making the order in which items in your scene tree render separate from the tree’s hierarchy. It allows you to change the display order of display objects by assigning layers.

Let’s start with a scenario in which this might be helpful – notes that can be dragged using handles that are on top. Each note and its handle is kept in the same container – so they move together; however, they need to be in separate “layers” – one below for the items and one above for the handles. In a conventional scene tree, it would not be possible to have this layering without splitting the notes and their handles into separate containers and setting their positions individually.

But @pixi/layers makes this possible! You can group items in your scene tree and render those groups as layers. These items will only render when their parent layer becomes active during rendering.

// The stage to be rendered. You need to use a PIXI.display.Stage
// so that it correctly resolves each DisplayObject to its layer.
const stage = new PIXI.display.Stage();

// Set the parentGroup on items in your scene tree.
const GROUPS = {
  BG: new PIXI.display.Group(),
  FG: new PIXI.display.Group(),
};

// These groups are rendered by layers.
stage.addChild(new PIXI.display.Layer(GROUPS.BG));
stage.addChild(new PIXI.display.Layer(GROUPS.FG));

// How to make an item so that the handle is above all
// other content
function makeItem() {
  const item = new PIXI.Container();
  const handle = item.addChild(new PIXI.Graphics()); // Do drawing
  const body = item.addChild(new PIXI.Graphics()); // Do drawing

  // Set the group of the handle to foreground
  handle.parentGroup = GROUPS.FG;

  // Set the group of the body to background
  body.parentGroup = GROUPS.BG;

  return item;
}
This changes the display order of items in the scene tree.

How it works

When you import @pixi/layers, it applies a mixin on PixiJS’ DisplayObject and Renderer. It adds these new behaviors:

  • If the scene root is a Stage, the renderer now calls updateStage – which traverses the scene tree and resolves each item to its group and layer.
  • A DisplayObject will only render when its active parent layer is being rendered (which is resolved in the update-stage step).
  • It also patches the Interaction API’s TreeSearch to correctly hit-test a layer-enhanced scene tree.

In the updating phase, every Group with sorting enabled will sort its display objects by their zOrder. This z-order is different from the built-in z-index that PixiJS provides. Both z-index and z-order are used for sorting objects into the order you want them rendered; z-order is the implementation provided by @pixi/layers. You can use them in conjunction – objects are sorted by z-index first, then by z-order.

When a layer renders, it will set the active layer on the renderer – which then indicates that objects in that layer can now render.

An example of how layering changes the display order

Note that Layer extends Container, so you can add children directly to it like Item 3 in the diagram. You don’t have to set parentGroup or parentLayer on these children as it is implicit.

Z-grouping to reduce sorting overhead

In containers with many children, if only a few z-indices are being used they can be replaced with a fixed number of layers. For example, when users edit a group of items they expect them to come on top. Instead of setting a higher z-index on these items, this can be implemented by promoting items to an “editing” layer. This is much easier than shifting items into another container, which interferes with interaction.

This technique replaces sorting with a linear-time traversal of your scene tree. It should especially be used when your scene tree is large.

Using zOrder

A Group will sort its items by their z-order if the sort option is passed:

import { Group } from '@pixi/layers';

// Group that sorts its items by z-order before rendering
const zSortedGroup = new Group(0, true);

If the z-order of a scene is relatively static, it’s more efficient to disable automatic z-order sorting and invoke it manually:

// Don't sort each frame
group.enableSort = false;

// Instead, call this when z-orders change

Another neat feature is that Group emits a sort event for each item before sorting them. This can be used to dynamically calculate the z-order for each item, which is particularly useful when you want to order items based on their y-coordinate:

// You have to enable sort manually if you don't pass
// "true" or a callback to the constructor.
group.enableSort = true;

// Sort items so that objects below (along the y-axis) are rendered over others
group.on('sort', (item) => {
  item.zOrder = item.y;
});
Check out this example – here the bunnies are moving back and forth but the bottommost bunnies are rendered over others:

Layers as textures

Layers can be rendered into intermediate render-textures out of the box. The useRenderTexture option enables this:

layer.useRenderTexture = true;
layer.getRenderTexture();// Use this texture to do stuff!

When a layer renders into a texture, you won’t see it on the canvas directly. Instead, its texture must be drawn manually onto the screen. This can be done using a sprite:

// Create a Sprite that will render the layer's texture onto the screen.
const layerRenderingSprite = app.stage.addChild(new PIXI.Sprite(layer.getRenderTexture()));

The layer’s texture is resized to the renderer’s screen – so too many layers with textures should not be used. This technique can be used to apply filters on a layer indirectly through the sprite rendering its texture:

// Apply a Gaussian blur on the sprite that indirectly renders the layer
layerRenderingSprite.filters = [new PIXI.filters.BlurFilter()];

By rendering into an intermediate texture, layers can be optimized to re-render when their content changes. You can split your scene tree so that layers are rendered separately – and then use the layer textures in the main scene.

// Main application with its own stage
const app = new PIXI.Application();

// The relatively static scene that is rendered separately. This is not
// directly shown on the canvas - later, a sprite is added
// to render its texture onto the canvas.
const staticStage = new PIXI.display.Stage();
const staticLayer = staticStage.addChild(new PIXI.display.Layer());

staticLayer.addChild(new ExpensiveMesh());
staticLayer.useRenderTexture = true;

// Add a sprite that renders the static scene's snapshot to the main
// scene.
app.stage.addChild(new PIXI.Sprite(staticLayer.getRenderTexture()));

// Rerenders the static scene in the next frame before
// the main scene is rendered onto the canvas. This should be invoked
// whenever the scene needs to be updated.
function rerenderExpensiveMeshNextFrame() {
  app.ticker.addOnce(() => {
    app.renderer.render(staticStage);
  });
}

Double buffering

A beautiful use of layer textures is showing trails of moving objects in a scene. The trick is to render the last frame’s texture into the current frame with a lower alpha. By applying an alpha, previous frames quickly decay, ensuring only a few frames are seen trailing.
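A bit of arithmetic shows how the alpha controls trail length: after n frames, an old frame's contribution has decayed to alphaⁿ. Here's a hypothetical helper computing when that drops below one 8-bit color step:

```javascript
// Frames until a trailing frame's contribution falls below one
// 8-bit color step (1/255), given the per-frame alpha.
function trailLengthInFrames(alpha) {
  // Smallest integer n with alpha^n < 1/255
  return Math.ceil(Math.log(1 / 255) / Math.log(alpha));
}

trailLengthInFrames(0.6); // → 11 (a short trail)
trailLengthInFrames(0.9); // → 53 (a much longer one)
```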

Since WebGL does not allow rendering a texture into itself, double buffering is needed. The layer needs to flip-flop between two textures: one to render into and one to render from. This can be enabled with useDoubleBuffer:

// Ensure the layer renders into a texture instead of the canvas
layer.useRenderTexture = true;

// Enable double buffering for this layer
layer.useDoubleBuffer = true;

Note that useRenderTexture must be enabled for double buffering – not enabling it will result in the layer rendering directly to the canvas.

Now, since the layer flip-flops between rendering into the two textures, the texture used to render the last frame back into the layer needs to flop-flip. The layer kit does this internally by hot swapping the framebuffer and WebGL texture of getRenderTexture() each frame.

// Create a sprite to render the last frame of the layer
const lastFrameSprite = new PIXI.Sprite(layer.getRenderTexture());

// Apply an alpha so the last frame decays a bit
lastFrameSprite.alpha = 0.6;

// Render the last frame back into the layer
layer.addChild(lastFrameSprite);

// Render the layer into the main stage
stage.addChild(new PIXI.Sprite(layer.getRenderTexture()));

In the above snippet, sprites are created using the layer’s texture. When it runs, the sprites are actually flip-flopping between two different textures each frame. See it in action in Ivan’s example:


How I wrote Node.js bindings for GNU source-highlight

GNU source-highlight is a C++ library for highlighting source code in several different languages and output formats. It reads from “language definition” files to lexically analyze the source code, which means you can add support for new languages without rebuilding the library. Similarly, “output format” specification files are used to generate the final document with the highlighted code.

I found GNU source-highlight while looking for a native highlighting library to use in webdoc. I added sources publishing support in webdoc 1.5.0, but Highlight.js more than sextupled webdoc’s single-threaded execution time; even after running Highlight.js on 18 worker threads, the final result was still twice the original execution time. So I decided to finally learn how to write Node.js bindings for native C++ libraries!

The Node.js bindings are available on npm – source-highlight. Here, I want to share the process of creating Node.js bindings for native libraries.

Node Addon API – Creating the C++ addon

The node-addon-api is the official module for writing native addons for Node.js in C++. The entry point for addons is defined by registering an initialization function using NODE_API_MODULE:

#include <napi.h>

Napi::Object Init (Napi::Env env, Napi::Object exports) {
    // TODO: Initialize module and expose APIs to JavaScript

    return exports;
}

NODE_API_MODULE(my_module_name, Init);

Here, the Init function accepts two parameters:

  • env – This represents the context of the Node.js runtime you are working with.
  • exports – This is an opaque handle to module.exports. You can set properties on this object to expose APIs to JavaScript code.

To expose a hello-world “function”, you can set a property on the exports object to a Napi::Function.


// Disable C++ exceptions since we are not using them. Otherwise, we'd
// need to configure node-gyp to enable them when compiling.
#define NAPI_DISABLE_CPP_EXCEPTIONS

#include <iostream>
#include <napi.h>

// Our C++ hello-world function. It takes a single JavaScript
// string and outputs "Hello <string>" to the standard output.
// It returns true on success, false otherwise.
Napi::Value helloWorld(const Napi::CallbackInfo& info) {
    // Extract our context early so we can use it to create primitive values.
    Napi::Env env = info.Env();

    // Return false if no arguments were passed.
    if (info.Length() < 1) {
        std::cout << "helloWorld expected 1 argument!";
        return Napi::Boolean::New(env, false);
    }

    // Return false if the first argument is not a string.
    if (!info[0].IsString()) {
        std::cout << "helloWorld expected string argument!";
        return Napi::Boolean::New(env, false);
    }

    // Convert the first argument into a Napi::String,
    // then cast it into a std::string.
    std::string msg = (std::string) info[0].As<Napi::String>();

    // Output our "hello world" message to the standard output.
    std::cout << "Hello " << msg;

    // Return true for success!
    return Napi::Boolean::New(env, true);
}

Napi::Object Init (Napi::Env env, Napi::Object exports) {
    // Wrap helloWorld in a Napi::Function
    Napi::Function helloWorldFn = Napi::Function::New<helloWorld>(env);

    // Set exports.helloWorld to our hello world function
    exports.Set(Napi::String::New(env, "helloWorld"), helloWorldFn);

    return exports;
}

NODE_API_MODULE(hello_world_module, Init);

helloWorld uses the Napi::Boolean wrapper to create boolean values. The wrappers for all JavaScript values are listed here. All of these wrappers extend the abstract type Napi::Value.

Instead of declaring a Napi::String parameter and returning a Napi::Boolean directly, helloWorld accepts a CallbackInfo reference and returns a Napi::Value. This generic signature is required to wrap it in a Napi::Function. The CallbackInfo contains the arguments passed by the caller in JavaScript code. To keep the native code from throwing an exception, the function validates the arguments.

After creating the binary module for this addon, it should be usable from JavaScript:

// hello_world.js

const { helloWorld } = require('./build/Release/hello_world');

helloWorld('world, you did it!');

node-gyp – Building the binary module from C++ code

node-gyp is a cross-platform tool for compiling native Node.js addons. It uses a fork of the gyp meta-build tool – “a build system that generates other build systems”. More specifically, node-gyp will configure the build toolchain specific to your platform to compile native code. Instead of creating a Makefile for Linux, Xcode project for macOS, and a Visual Studio project for Windows – you need to create a single binding.gyp file. node-gyp will handle the rest for you; this is particularly useful when you want the native code to compile on the user’s machine and not serve prebuilt binaries.

To use node-gyp and node-addon-api, you’ll need to create an npm package (run npm init). Then install node-addon-api locally and node-gyp globally,

npm install --save node-addon-api
npm install -g node-gyp

Now, to build the example hello-world addon, we’ll need a very simple binding.gyp configuration:

{
    "targets": [
        {
            "target_name": "hello_world",
            "sources": [""],
            "include_dirs": [
                "<!@(node -p \"require('node-addon-api').include_dir\")"
            ]
        }
    ]
}
This configuration defines one build-target: our “hello_world” addon. We have one source file “”, and we want to include the header files provided by node-addon-api. Here, the <!@(...) directive tells node-gyp to evaluate the code ... and use the resulting string. node-addon-api exports the include_dir variable, which is the path to the directory containing its header files.

We can finally run node-gyp,

# This will create the Makefile/Xcode/MSVC project
node-gyp configure

# This will invoke the platform-specific project's build toolchain and build the addon
node-gyp build

You can also run node-gyp rebuild instead of running the two commands. node-gyp should now have created the binary module for the hello_world addon at ./build/Release/hello_world.node. You can require or import it like any other Node.js module! Run the hello_world.js file to test it!

I created this Replit so you can run the hello_world addon right in your browser!

Linking your C++ addon with another native library

Now that we’ve created a simple addon, we want to write bindings for another library. There are two ways to do this:

  • Include the sources of the native library in your repository (using a git submodule) and include that in your node-gyp sources. node-libpng does this. It has two additional gyp configurations in the deps/ folder for compiling libpng and zlib. Since compiling libraries can slow down an npm install, node-libpng prebuilds the binaries and its install script downloads them. sharp also does this and falls back to locally compiling libvips if there isn’t a prebuilt binary for the client platform.
  • Statically link to a library preinstalled on the client machine. The downside of this approach is that your user must install the native library before using your bindings. This might be necessary if the native library uses a sophisticated build system that’s hard to replicate using node-gyp. I did this for node-source-highlight because source-highlight depends on the Boost library.

To statically link to a preinstalled copy of the library on the client machine, you can add the following snippet to your addon target in binding.gyp:

"link_settings": {
    "libraries": [
        "-l<name>"
    ]
}
where <name> is the “name” in “libname” of the library you are linking. For example, you would use “-lsource-highlight” to link to “libsource-highlight”. Now, assuming you’ve correctly installed the native library on your machine, you can use its headers in your C++ code.

You can wrap the underlying APIs of a native library and expose them to JavaScript. In node-source-highlight, the SourceHighlight class wraps an instance of srchilite::SourceHighlight.

// SourceHighlight.h
#include <napi.h>
#include <srchilite/sourcehighlight.h>

class SourceHighlight : public Napi::ObjectWrap<SourceHighlight> {
  public:
    static Napi::Object Init(Napi::Env env, Napi::Object exports);
    Napi::Value initialize(const Napi::CallbackInfo& callbackInfo);

  private:
    srchilite::SourceHighlight instance;
};

#include <napi.h>
#include "SourceHighlight.h"

Napi::Value SourceHighlight::initialize(const Napi::CallbackInfo& info) {
    return info.Env().Undefined();
}

Napi::Object SourceHighlight::Init(Napi::Env env, Napi::Object exports) {
    Napi::Function func = DefineClass(env, "SourceHighlight", {
        InstanceMethod("initialize", &SourceHighlight::initialize)
    });

    Napi::FunctionReference* constructor = new Napi::FunctionReference();
    *constructor = Napi::Persistent(func);

    exports.Set("SourceHighlight", func);

    return exports;
}

// Initialize native add-on
Napi::Object Init (Napi::Env env, Napi::Object exports) {
    SourceHighlight::Init(env, exports);
    return exports;
}

NODE_API_MODULE(sourcehighlight, Init);

In this snippet, SourceHighlight::Init does the heavy lifting of creating a class constructor function and attaching it to the exports. The SourceHighlight class holds the underlying srchilite::SourceHighlight instance and each method invokes the corresponding method on that instance after validating the arguments passed.

The full sources are available here.



Thanks to Mat Grove’s work, PixiJS 6.1.0 will ship with support for Uniform Buffer Objects, a WebGL 2 optimization to make uniform uploads faster.


UBOs are handles to GPU buffers that store uniform data. They can be attached to a shader in a single step, without needing to upload each uniform field individually. If you share uniforms between multiple shaders, this can be used to reduce uploads of relatively static data.

Theoretically, you can optimize filters with UBOs. The common uniforms passed to Filter don’t change between them: inputSize, inputPixel, inputClamp, outputFrame.


To use UBOs in a shader, you’ll need to use GLSL 3 ES.

#version 300 es
#define SHADER_NAME Example-Shader

precision highp float;

To migrate an existing GLSL 1 shader (the default), you need to use the in keyword instead of attribute, out instead of varying in the vertex shader, in instead of varying in the fragment shader, and then create an out variable in the fragment shader instead of using gl_FragColor.
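For instance, a minimal fragment shader might migrate like this (a sketch using PixiJS's conventional uSampler / vTextureCoord names):

```glsl
// GLSL 1 (before)
varying vec2 vTextureCoord;
uniform sampler2D uSampler;

void main(void) {
    gl_FragColor = texture2D(uSampler, vTextureCoord);
}
```

becomes:

```glsl
#version 300 es
// GLSL 3 ES (after)
precision highp float;

in vec2 vTextureCoord;
uniform sampler2D uSampler;
out vec4 fragColor;

void main(void) {
    fragColor = texture(uSampler, vTextureCoord);
}
```

Note that GLSL 3 ES also renames texture2D to the overloaded texture function.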

You can then move some of your uniforms into a UBO block:

uniform location_data {
  mat3 locationMatrix;
};

In this example, you can reference the locationMatrix uniform directly.

void main(void) {
  mat3 matrix = locationMatrix;
}

To upload the UBO uniforms, you pass a UniformBufferGroup in your shader’s uniforms:

import { Matrix, Shader, UniformBufferGroup } from 'pixi.js';

Shader.from(vertexSrc, fragmentSrc, {
    location_data: UniformBufferGroup.uboFrom({
        locationMatrix: new Matrix().toArray(),
    }),
});

UniformBufferGroup.uboFrom creates a “static” UBO. If you ever change a value in it, you’ll need to update it.

Here’s an example that applies a gradient color to a texture using a UBO:

When should I use UBOs?

UBOs are useful if you have multiple shaders that share static uniform data. If your uniforms are dynamic and change very often, UBOs will not be much of an optimization.


Federated Events API

PixiJS 6.1.0 will ship with an experimental UI event infrastructure that provides a much more robust and DOM-compatible solution than the incumbent interaction plugin. This change went through PixiJS’ RFC 7071 and was merged in PR 7213.

I named it the “Federated Events API.” It’s federated because you can create multiple event boundaries and override their logic for parts of your scene. Each event boundary only controls events for the scene below them – not unlike a federation.


I developed the Federated Events API to overcome two significant limitations of the Interaction API –

  • DOM Incompatibility
  • Extensibility

Apart from these API-facing issues, we also needed to refactor the implementation to make it more maintainable.

DOM Incompatibility

The Interaction API had a synthetic InteractionEvent that didn’t overlap well enough with the DOM’s PointerEvent. If your UI also used DOM elements, event handlers still had to be specific to either PixiJS or the DOM.

The Federated Events API brings multiple events that inherit their DOM counterparts. This means your event handlers are agnostic to whether they’re looking at a DOM or PixiJS event. DisplayObject now also has addEventListener and removeEventListener methods.

The semantics of some interaction events diverged from those of the Pointer Events API.

  • pointerover and pointerout events didn’t bubble up to their common ancestor.
  • pointerenter and pointerleave events were missing.
  • pointermove events would fire throughout the scene graph, instead of just on the hovered display object.

This gets corrected in this new API!

Another important addition is the capture phase for event propagation. The new API’s event propagation matches that of the DOM.


Extensibility

The Interaction API’s implementation was very brittle, and overriding any detail of it was hellish. The rigid architecture also meant that customizing interaction for a part of your scene was impossible.

This new API lets you override specific details of the event infrastructure. That includes:

  • optimizing hit testing (spatial hash acceleration?)
  • adding custom events (focus, anyone?)
  • modifying event coordinates (handy if you’re using projections)

The API also lets you mount event boundaries at specific parts of your scene graph to override events for display objects underneath it.

Other improvements


The EventSystem is the main point of contact for federated events. Adding it to your renderer will register the system’s event listeners, and once it renders – the API will propagate FederatedEvents to your scene. The EventSystem’s job is to normalize native DOM events into FederatedEvents and pass them to the rootBoundary. It’s just a thin wrapper with a bit of configuration & cursor handling on top.

The EventBoundary object holds the API’s core functionality – taking an upstream event, translating it into a downstream event, and then propagating it. The translation is implemented as an “event mapping” – listeners are registered for handling specific upstream event types and are responsible for translating and propagating the corresponding downstream events. This mapping isn’t always one-to-one; the default mappings are as follows:

  • pointerdown β†’ pointerdown
  • pointermove β†’ pointerout, pointerleave, pointermove, pointerover, pointerenter
  • pointerup β†’ pointerup
  • pointerout β†’ pointerout, pointerleave
  • pointerover β†’ pointerover, pointerenter
  • wheel β†’ wheel

This list doesn’t include the mouse- and touch-specific events that are emitted too.


An event boundary can search through and propagate events throughout a connected scene graph – one connected by parent-child relationships.

In certain cases, however, you may want to “hide” the implementation scene for an object. @pixi-essentials/svg does this to prevent your scene from being dominated by SVG rendering nodes. Instead of holding the nodes below as children, you place them in a root container and render it separately.

// Crude anatomy of a disconnected scene
class HiddenScene {
  root: Container;

  render(renderer) {
    renderer.render(this.root);
  }
}

This poses a problem when you want interactivity to still flow through this “point of disconnection”. Here, an additional event boundary that accepts upstream events and propagates them through root can fix this! See the nested boundary example at the end for how.


Basic usage

Since the Federated Events API won’t be production-ready until PixiJS 7, it’s not enabled by default. To use it, you’ll have to delete the interaction plugin and install the EventSystem manually. If you’re using a custom bundle, you can remove the @pixi/interaction module too.

import { EventSystem } from '@pixi/events';
import { Renderer } from '@pixi/core';// or pixi.js

delete Renderer.__plugins.interaction;

// Assuming your renderer is at "app.renderer"
if (!('events' in app.renderer)) {
    app.renderer.addSystem(EventSystem, 'events');
}


Let’s start with this barebones example – handling clicks on a display object. Just like the Interaction API, you need to mark it interactive and add a listener.

// Enable interactivity for this specific object. This
// means that an event can be fired with this as a target.
object.interactive = true;

// Listen to clicks on this object!
object.addEventListener('click', function onClick() {
    // Make the object bigger each time it's clicked!
    object.scale.set(object.scale.x * 1.1);
});

A handy tool for handling “double” or even “triple” clicks is the event’s detail property. The event boundary keeps track of how many clicks have occurred, each within 200ms of the last. For a double click, it’ll be set to 2. The following example scales the bunny based on this property – you have to click fast to make the bunny larger!


Dragging is done slightly differently with the new API – you have to register the pointermove handler on the stage, not the dragged object. Otherwise, if the pointer moves out of the selected DisplayObject, it’ll stop getting pointermove events (to emulate the InteractionManager’s behavior – enable moveOnAll in the root boundary).

The upside is much better performance and mirroring of the DOM’s semantics.

function onDragStart(e) {
    selectedTarget =;

    // Start listening to dragging on the stage
    app.stage.addEventListener('pointermove', onDragMove);
}

function onDragMove(e) {
    // Don't use e.target because the pointer might
    // move out of the bunny if the user drags fast,
    // which would make e.target become the stage.
    selectedTarget.parent.toLocal(e.global, null, selectedTarget.position);
}


The wheel event is available to use just like any other! You can move your display object by the event’s deltaY to implement scrolling. This example does that for a slider’s handle.

Right now, wheel events are implemented as “passive” listeners. That means you can’t call preventDefault() to block the browser from scrolling other content, so you should only use wheel events in fullscreen canvas apps.

slider.addEventListener('wheel', onWheel);
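As an illustration, a small clamping helper (hypothetical, not part of the API) shows how a handle’s position could follow deltaY while staying on the track:

```javascript
// Move a slider handle by the wheel's deltaY, clamped to the
// track's [minY, maxY] range so it never leaves the track.
function scrollHandle(currentY, deltaY, minY, maxY) {
    return Math.min(maxY, Math.max(minY, currentY + deltaY));
}
```

Inside onWheel, you’d apply it as something like `handle.y = scrollHandle(handle.y, e.deltaY, 0, trackHeight)`.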

Advanced use-cases

Manual hit-testing

To override a specific part of event handling, you can inherit from EventBoundary and set the event system’s rootBoundary!

Here’s an example that uses a SpatialHash to accelerate hit-testing. A special HashedContainer holds a spatial hash for its children, and that is used to search through them instead of a brute force loop.

This technique is particularly useful for horizontal scene graphs, where a few containers hold most of the display objects as children.
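The core idea can be sketched in plain JavaScript (this is an illustration of spatial hashing, not the actual HashedContainer implementation): objects are bucketed by grid cell, so a point query only brute-forces one bucket instead of every child.

```javascript
// Minimal spatial hash for point queries over axis-aligned bounds.
class SpatialHash {
    constructor(cellSize = 256) {
        this.cellSize = cellSize;
        this.buckets = new Map();
    }

    key(x, y) {
        return `${Math.floor(x / this.cellSize)},${Math.floor(y / this.cellSize)}`;
    }

    insert(object, bounds) {
        // For simplicity, index by the cell containing the top-left
        // corner; a full implementation would cover every cell the
        // bounds overlap.
        const k = this.key(bounds.x, bounds.y);
        if (!this.buckets.has(k)) this.buckets.set(k, []);
        this.buckets.get(k).push({ object, bounds });
    }

    queryPoint(x, y) {
        // Only this cell's bucket needs a brute-force scan.
        const candidates = this.buckets.get(this.key(x, y)) || [];
        return candidates
            .filter(({ bounds: b }) =>
                x >= b.x && x < b.x + b.width &&
                y >= b.y && y < b.y + b.height)
            .map(({ object }) => object);
    }
}
```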

Nested boundaries

The ultimate example: how you can use a nested EventBoundary in your scene graph. As mentioned before, this is useful when you have a disconnected scene graph and you want events to propagate over points of disconnection.

To forward events from upstream, you make the “subscene” interactive, listen to all the relevant events, and map them into the event boundary below. The event boundary should be attached to the content of your scene. It’s like implementing a stripped down version of the EventSystem.

// Override copyMouseData to apply inverse worldTransform
// on global coords
this.boundary.copyMouseData = (from, to) => {
    // Apply default implementation first
    EventBoundary.prototype.copyMouseData
        .call(this.boundary, from, to);

    // Then bring global coords into content's world
    this.worldTransform.applyInverse(to.global, to.global);
};

// Propagate these events down into the content's
// scene graph!
[
    'pointerdown',
    'pointerup',
    'pointermove',
    'pointerover',
    'pointerout',
    'wheel',
].forEach((event) => {
    this.addEventListener(
        event,
        (e) => this.boundary.mapEvent(e),
    );
});
To make the cursor on internal objects work too, you should expose the event boundary’s cursor property on the subscene.

get cursor() {
    return this.boundary.cursor;
}

PixiJS Tilemap Kit 3

In my effort to bring tighter integration to the PixiJS ecosystem, I’m upgrading external PixiJS packages and working towards lifting them to the standard of the main project. @pixi/tilemap 3 is the first package in this process. Yes, I’ve republished pixi-tilemap as @pixi/tilemap.

Here, I want to cover the new, leaner API that @pixi/tilemap 3 brings to the table. This package by Ivan Popleyshev gives you an optimized rectangular tilemap implementation you can use to render a background for your game or app composed of tile textures. The documentation is available at


A tileset is the set of tile textures used to build the scene. Generally, you’d want the tileset to be in one big base-texture to reduce the number of network requests and improve rendering batch efficiency.

To use @pixi/tilemap, you’ll first need to export a tileset atlas as a sprite sheet. PixiJS’ spritesheet loader populates your tile textures from the sheet’s manifest. If you don’t have one at hand, you can create a sample tileset as follows:

  • Download this freebie tileset from here: You’ll need to sign up, however.
  • Download and install TexturePacker:
  • Drag the “PNG” folder of the downloaded tileset into TexturePacker. It will automatically pack all the tiles into one big atlas image.
  • Then click on “Publish sprite sheet” and save the manifest.
The generated tileset should look like this!


The Tilemap class renders a tilemap from a predefined set of base-textures containing the tile textures. Each rendered tile references its base-texture by an index into the tileset array. This tileset array is first passed when the tilemap is created; however, you can still append base-textures after instantiation without affecting previously added tiles.

The following example paints a static tilemap from a CraftPix tileset.

The texture passed to tile() must belong to one of the atlases in the tilemap’s tileset. Otherwise, the tilemap will silently drop the tile. As we’ll discuss later on, CompositeTilemap can be used to get around this limitation.

Animated Tiles

The options passed to tile() let you animate the rendered tile between different tile textures stored in the same base-texture. The different frames must be laid out uniformly in a grid (or a single row/column).

The texture you pass to tile() will be the first frame. The following parameters then specify how Tilemap finds the other frames:

  • animX: The x-offset between frame textures.
  • animY: The y-offset between frames.
  • animCountX: The number of frame textures per row of the table. This is 1 by default.
  • animCountY: The number of frames per column of the table. This is 1 by default.

If your frames are all in a row, you don’t need to specify animY and animCountY.

The animation frame vector (tileAnim) specifies which frame to use for all tiles in the tilemap: tileAnim[0] selects the column and tileAnim[1] the row, modulo the frame counts. Since the value wraps around once a column/row is fully animated, you don’t have to reset it yourself.

The above example takes advantage of the fact that some regular doors and wide doors are placed in a row in the sample atlas. animX, animCountX are used to animate between them every 500ms.
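The frame-selection math can be sketched in plain JavaScript (this is an illustration of the wrapping behavior, not the actual shader code; the parameter names mirror the tile() options):

```javascript
// Compute the pixel offset of the current animation frame from
// the first frame, wrapping via the modulo so animations loop.
function frameOffset(tileAnim, { animX = 0, animY = 0, animCountX = 1, animCountY = 1 }) {
    const column = tileAnim[0] % animCountX;
    const row = tileAnim[1] % animCountY;

    return {
        dx: column * animX, // x-offset from the first frame
        dy: row * animY,    // y-offset from the first frame
    };
}
```

With two frames 32px apart in a row, `frameOffset([3, 0], { animX: 32, animCountX: 2 })` lands on the second frame (dx = 32), and incrementing tileAnim[0] again wraps back to the first.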

Tileset limitations

Tilemap renders the entire tilemap in one draw call. Unlike PixiJS’ Graphics, it doesn’t build intermediate batches of geometry. All of the tileset’s base-textures are bound to the GPU together.

This means that there’s a limit to how many tile sprite sheets you can use in each tilemap. WebGL 1 guarantees that at least 8 base-textures can be used together; however, most devices support 16. You can check this limit by evaluating gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS) on the rendering context.

If your tileset contains more base-textures than this limit, Tilemap will silently fail to render its scene.

If you’re using only one sprite sheet like the examples above, you don’t need to worry about hitting this limit. Otherwise, CompositeTilemap is here to help.


A “tilemap composite” will layer tilesets into multiple tilemaps. You don’t need to predefine the base-textures you’re going to use. Instead, it will try to find a tilemap with the same base-texture in its tileset when you add a tile; if none exists, the base-texture is added into a layered tilemap’s tileset. New tilemaps are automatically created when existing ones fill up.
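The find-or-create behavior can be sketched as follows (assignLayer and maxTexturesPerTilemap are illustrative names, not the actual @pixi/tilemap internals):

```javascript
// Pick the layer tilemap a base-texture should go into: reuse a
// layer that already has it, fill the last layer if it has room,
// or start a new layer otherwise.
function assignLayer(layers, baseTexture, maxTexturesPerTilemap) {
    // Reuse a layer that already has this base-texture in its tileset.
    let layer = layers.find((l) => l.tileset.includes(baseTexture));
    if (layer) return layer;

    // Otherwise, append to the last layer if it still has room...
    layer = layers[layers.length - 1];
    if (layer && layer.tileset.length < maxTexturesPerTilemap) {
        layer.tileset.push(baseTexture);
        return layer;
    }

    // ...or start a new layer tilemap.
    layer = { tileset: [baseTexture] };
    layers.push(layer);
    return layer;
}
```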

In most cases, you can trivially swap usage of Tilemap with CompositeTilemap. However, you have to be careful about z-ordering. The tiles using textures in later tilemaps will always render above. This may become a problem with overlapping tiles in some cases.

The following example uses a CompositeTilemap to render one of the previous examples. Instead of using a separate Sprite for the background, it adds the background itself as a tile too.

Tilemap rendering


Tilemap internally stores tiles in a geometry buffer, which contains interleaved data for each vertex.

  • Position (in local space)
  • Texture coordinates
  • Texture frame of the tile
  • Animation parameters (specially encoded into a 32-bit 2-vector)
  • Texture index (into the tileset)

This buffer is mostly static and is lazily updated whenever the tiles are modified between rendering ticks. If the tilemap is left unchanged, the geometry is used directly from graphics memory.
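The lazy-update pattern can be sketched as follows (illustrative only, not the actual buffer code): mutations only mark the geometry dirty, and the buffer is rebuilt at most once, on the next render.

```javascript
// Dirty-flag wrapper around an expensive geometry rebuild.
class LazyGeometry {
    constructor(build) {
        this.build = build;   // rebuilds the vertex data
        this.dirty = true;
        this.buffer = null;
        this.rebuilds = 0;
    }

    invalidate() {
        this.dirty = true; // called whenever tiles are modified
    }

    getBuffer() {
        if (this.dirty) {
            this.buffer = this.build();
            this.rebuilds += 1;
            this.dirty = false;
        }
        return this.buffer; // unchanged tilemaps reuse this
    }
}
```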


The TileRenderer plugin holds the shared tilemap shader.

The vertex program decodes the animation parameters, calculates and passes the texture frame, texture coordinates, and texture index to the fragment program. The animation frame vector is passed as a uniform. 

Then, the fragment program samples the texel from the appropriate texture and outputs the pixel.


@pixi/tilemap’s settings object (discussed further on) contains a property called TEXTILE_UNITS. This is the number of tile base-textures that are “sewn” together when uploaded to the GPU. You can use this to increase the tileset limit per texture.

The “combined” texture is called a textile. The textile is divided into a 2-column table of square slots. Each slot is a square of size TEXTILE_DIMEN. Your tileset base-textures must be smaller than this dimension for the textile to work.
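The 2-column slot layout described above can be sketched as (textileSlotPosition is a hypothetical helper, not part of the API):

```javascript
// Position of textile slot i in the 2-column table of
// TEXTILE_DIMEN-sized squares: column (i % 2), row floor(i / 2).
function textileSlotPosition(unitIndex, textileDimen) {
    return {
        x: (unitIndex % 2) * textileDimen,
        y: Math.floor(unitIndex / 2) * textileDimen,
    };
}
```

With the default dimension of 1024, the third base-texture (index 2) would land at the start of the second row, (0, 1024).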

The following demonstration shows what a textile looks like when uploaded. The textile-tile dimension was set to 256 so that images aren’t spread out too far (it is 1024 by default). 


@pixi/tilemap exports a “settings” object that you should configure before a tilemap is created.

  • TEXTURES_PER_TILEMAP: This is the limit of tile base-textures kept in each layer tilemap of a composite. Once the last tilemap is filled to this limit, the next texture will go into a new tilemap.
  • TEXTILE_DIMEN, TEXTILE_UNITS: Used to configure textiles. If TEXTILE_UNITS is set to 1 (the default), textiles are not used.
  • TEXTILE_SCALE_MODE: Used to set the scaling mode of the resulting textile textures.
  • use32bitIndex: This option enables rendering tilemaps with more than 16K tiles (64K vertices).
  • DO_CLEAR: This configures whether textile slots are cleared before the tile textures are uploaded. You can disable this if tile textures “fully” cover TEXTILE_DIMEN and leave no space for a garbage background to develop.

Canvas support

@pixi/tilemap has a canvas fallback, although it is significantly slower. In the future, I might spin out a @pixi/canvas-tilemap to make this fallback optional.