PixiJS – Pal's Blog

Do not use uncompressed textures

Texture uploading

The standard way to upload a texture to graphics hardware in WebGL is to use texImage2D. It comes from the OpenGL ES glTexImage2D function, which accepts no native multimedia formats but rather a buffer of raw pixel data. That means the WebGL implementation abstracts away an inefficient process of uncompressing images before uploading them to graphics memory.

This under-the-hood decompression takes up CPU cycles when an application loads its assets. Not only that, but uncompressed textures are memory-inefficient; for context, a 2048×2048 8-bit RGBA texture takes up a minimum of 16mb of graphics memory.
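The arithmetic behind that figure is simply width × height × bytes per texel (a quick sketch; the helper name is mine):

```javascript
// Uncompressed RGBA8: 4 bytes per texel (mipmaps would add roughly 33% more)
const uncompressedBytes = (width, height) => width * height * 4;

uncompressedBytes(2048, 2048) / (1024 * 1024); // 16 MiB
```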

It’s easy to load up many textures and hog hundreds of megabytes of graphics memory. When the GPU memory limit is hit, desktop computers will generally start paging memory to disk, causing system-wide graphics pauses lasting for seconds. On mobile devices, the OS immediately kills applications that use too much memory to prevent any slowdowns. I’ve noticed iOS Safari will reload the page if it takes up too much graphics memory, which includes the memory the browser itself uses to render HTML.

Texture compression

I recommend using GPU-compressed texture formats to reduce the resources consumed by an application’s textures. This form of texture compression is designed specifically for hardware-accelerated graphics programs. GPU-compressed formats generally use “block compression”, where blocks of pixels are compressed into a single datapoint. I dive into how you can use them in PixiJS at the end.

The WebGL API provides support for GPU-compressed texture formats through various extensions. A GPU-compressed texture does not get decompressed before being uploaded to graphics memory. Instead, the GPU decompresses texels on-the-fly in hardware when a shader reads the texture. Texture compression is designed to allow “random access” reads that don’t require the GPU to decompress the whole texture to retrieve a single texel.
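To make “block compression” concrete, here is a sketch of how S3TC/DXT sizing works: every 4×4 block of texels compresses to a fixed size (8 bytes for DXT1, 16 bytes for DXT5), regardless of its content. The helper name is mine:

```javascript
// S3TC/DXT block compression: each 4×4 texel block becomes a
// fixed-size datapoint (8 bytes for DXT1, 16 bytes for DXT5).
function dxtSizeBytes(width, height, bytesPerBlock) {
  const blocksX = Math.ceil(width / 4);
  const blocksY = Math.ceil(height / 4);
  return blocksX * blocksY * bytesPerBlock;
}

dxtSizeBytes(2048, 2048, 8);  // DXT1: 2 MiB, vs. 16 MiB for RGBA8
dxtSizeBytes(2048, 2048, 16); // DXT5: 4 MiB
```

The fixed block size is what makes random-access texel reads possible: the GPU can compute exactly where any block lives without decompressing its neighbors.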

By using compressed texture formats, a WebGL application can free up CPU cycles spent by the browser decompressing images and reduce the video memory occupied by textures. The latter is especially important on low-end devices or when your application is using a lot of textures.

I built a tool, Zap, to let you generate GPU-compressed textures with no setup!


GPU-compressed textures come with their caveats, like all good things in life!

One trade-off is that GPU-compressed textures take up more disk space than native images, like PNGs, which use more sophisticated compression algorithms. This means downloading a bunch of compressed textures can increase asset loading time over the network.

GPU-compressed formats are also very platform-dependent. Most graphics cards support only one or two of the several texture compression formats. This is because the decoding algorithm has to be built into the hardware. And so, having a fallback to native images is still necessary; maintaining images in several different compressed formats is also a bunch of homework. PixiJS’ @pixi/compressed-textures allows an application to serve a texture manifest that lists the available formats; then, the texture loader picks the appropriate version based on the device.
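As an illustration of that platform dependence, you can probe which compression extensions a WebGL context actually exposes. The extension names below are the standard WebGL ones; the helper function is my own sketch:

```javascript
// Probe which GPU-compressed texture extensions a WebGL context exposes.
// Availability varies by GPU, which is why a fallback is still needed.
function supportedCompressedFormats(gl) {
  return {
    s3tc: !!gl.getExtension('WEBGL_compressed_texture_s3tc'),   // DXT1/3/5 (desktop)
    etc: !!gl.getExtension('WEBGL_compressed_texture_etc'),     // ETC2 (mobile GPUs)
    astc: !!gl.getExtension('WEBGL_compressed_texture_astc'),   // ASTC (newer mobile)
    pvrtc: !!gl.getExtension('WEBGL_compressed_texture_pvrtc'), // PVRTC (older iOS)
  };
}

// In a browser:
// supportedCompressedFormats(document.createElement('canvas').getContext('webgl'));
```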


“Supercompressed” texture formats like Basis solve the disk size and platform-dependency problems of GPU-compressed formats. A supercompressed texture is an intermediate format served over the network and then transcoded into a supported GPU-compressed format on the client. Basis provides a transcoder build that can be invoked from JavaScript. As a fallback, the Basis transcoder also allows decoding to an uncompressed 16-bit RGB texture.

The Basis transcoder is an additional ~500kb of JavaScript / WebAssembly code (without gzip compression). Fetching and executing it adds a small overhead when initializing, but that should be worth it if you use more than a megabyte of images. The Basis supercompressed format is still smaller than native formats like PNG, so you might actually save download time on average.

Testing how much GPU memory is saved

If you’ve kept reading to this point, you might be wondering how to tell whether using compressed textures is worth it.

I made two sample apps that load the same 600x400px texture 100 times, one using uncompressed textures, and the other using compressed textures. A small canvas is used to reduce the framebuffer’s effect on memory usage. I used PixiJS because PixiJS 6’s @pixi/compressed-textures has out-of-the-box support for compressed formats!

You can open the sample apps in Chrome and open the browser task manager. Note that you might have to wait up to 30 seconds for them to load because Replit seems to throttle the image downlink. To view the GPU memory of each process, you’ll need to enable that column.

The uncompressed sample (above) takes 100mb of GPU memory.

While the compressed sample takes only 30mb − that’s 70% less hogged memory! PixiJS also has to create an HTMLImageElement for each native image texture, and you can see that also affects the main memory usage.
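A back-of-the-envelope check lines up with those task-manager numbers (assuming no mipmaps, 4 bytes per texel uncompressed, and roughly 1 byte per texel for DXT5):

```javascript
// 100 copies of a 600×400 texture:
const texels = 600 * 400;
const uncompressedMB = (texels * 4 * 100) / 2 ** 20; // RGBA8: 4 bytes/texel ≈ 91.6
const dxt5MB = (texels * 1 * 100) / 2 ** 20;         // DXT5:  1 byte/texel  ≈ 22.9
// ≈ 92mb vs ≈ 23mb — in line with the ~100mb and ~30mb observed
// (the remainder is framebuffers and other driver allocations)
```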

Of course, the trade-off is the 4–5× larger download size of the textures (25mb vs. 6mb). As I mentioned earlier, if you’re downloading more than a megabyte of textures − it’s worth using supercompression to save bandwidth.

PixiJS 6’s @pixi/basis adds support for using Basis textures. To test Basis, I generated a Basis texture from Zap and plugged it into this sample.

The results are similar to those of the compressed texture sample; in this case, PixiJS chose a more compact compressed format (DXT1) than the one I uploaded in the prior sample (DXT5), so GPU memory usage has decreased further.

Moreover, this sample fetches all textures in just 1.7mb of network usage!

Notice the “dedicated worker” row in the task manager. @pixi/basis creates a pool of workers for transcoding so the application UI does not slow down.

Try it out using Zap

Zap is a tool I built to help you get started with texture compression. Traditional tools like Compressonator, NVIDIA’s Texture Tools, and PVRTexTool are clunky, OS-specific, and have a steep learning curve. I literally had to install Windows to test out Compressonator, and it was really slow.

Zap is a simple web app that lets you upload a texture to be processed by my server. It supports 10 different compression formats plus the UASTC basis supercompression format. Not only that, it’s free to use (for now 😀).

To use Zap, simply upload a native image and select the compression formats you want. That will redirect you to a permanent link, at which the compressed textures will be available after processing. It may take several seconds on larger images. Note the compressed textures will be deleted after a few days.

PixiJS Compressed Textures Support

PixiJS 6 supports most of the GPU-compressed formats out of the box (exception being BC7). You can use them just like you use native images.

To use the Basis format, you need to import BasisLoader from @pixi/basis, load the transcoder, and register the loader. Then, the PixiJS Loader API can be used in the standard manner:

// Include the following script instead, if not using ESM
// <script src="https://cdn.jsdelivr.net/npm/@pixi/basis@6.2.2/dist/browser/basis.min.js"></script>
import { Loader } from 'pixi.js';
import { BasisLoader } from '@pixi/basis';

// Load transcoder from JSDelivr
const BASIS_TRANSCODER_JS_URL = 'https://cdn.jsdelivr.net/npm/@pixi/basis@6.2.2/assets/basis_transcoder.js';
const BASIS_TRANSCODER_WASM_URL = 'https://cdn.jsdelivr.net/npm/@pixi/basis@6.2.2/assets/basis_transcoder.wasm';

// Without this, PixiJS can't decompress *.basis files!
BasisLoader.loadTranscoder(BASIS_TRANSCODER_JS_URL, BASIS_TRANSCODER_WASM_URL);

// Make sure the BasisLoader is being used!
Loader.registerPlugin(BasisLoader);

// Usage:
new Loader()
    .add("your-file.basis", "your-file.basis")
    .load((_, resources) => {
       // Use this texture!
       const texture = resources['your-file.basis'].texture;
    });
Hey there, I’m Shukant and I’m building the future of work at Teamflow, the best virtual office for remote companies. Thanks for visiting my site!


PixiJS Picture Kit

Ivan Popelyshev has been working hard upgrading the libraries he has authored to PixiJS 6. The next one up is PixiJS Picture Kit – a shader-based implementation of blending modes that WebGL doesn’t natively support. Apart from blending, the “backdrop” texture it exposes can be used for other kinds of filtering.

Blend Modes

This section goes over blend modes and how they work in WebGL.

The blend mode defines how colors output by the fragment shader are combined with the framebuffer’s colors. In simpler terms, a blend mode is used to mix the colors of two objects where they overlap. Below, the blend modes supported in PixiJS are shown by rendering a semi-transparent blue square over a red circle:

A showcase of all the blend modes available in PixiJS. The 4th column shows the blend modes PixiJS Picture adds. Click on the image to edit the code!

The normal blend mode makes the source color, blue, appear in the foreground over the destination color, red, in the background.

Porter Duff operations

The blend modes in the 2nd and 3rd columns have suffixes OVER, IN, OUT, and ATOP describing Porter Duff operations. These represent image compositing operations.

  • OVER – colors are mixed for the whole object being rendered
  • IN – only overlapping areas are rendered
  • OUT – colors are outputted only in non-overlapping areas
  • ATOP – colors are outputted only over existing content

In PixiJS, the blend modes only apply to pixels in the object being rendered, so the compositing operations look a bit different. For example, SRC_IN and SRC_ATOP look the same here. An actual IN operation would erase non-overlapping areas of the red circle, but since PixiJS only applies the blend mode in the blue square’s area, this is not possible with blending.

The blend modes with prefix DST switch which color is in the foreground. Even though the blue squares are rendered after the red circles, they are behind with DST blend modes. The DST_OVER blend mode will make a scene appear as if z-indices were reversed.


The blend modes in the 1st column change the arithmetic used to mix the source and destination color.

  • ADD – Sums the source and destination color with equal weighting instead of alpha-weighting
  • SUBTRACT – Subtracts the source color from the destination. Negative values are clipped to zero.
  • MULTIPLY – The colors are multiplied, which always results in darker colors.

Blend equation

The blend equation is a linear function on the source color and destination color that calculates the output color. This equation can be set separately for the RGB and alpha components of colors.


blendFunc is used to set the weights for the source and destination colors. Instead of passing predefined values for these weights, a WebGL constant representing these weights needs to be passed. For example, gl.DST_ALPHA will set the weight to the destination color’s alpha.

For the normal blend mode, you’d use:

gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA)
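A CPU-side sketch of what that state computes for premultiplied colors (the function name is mine):

```javascript
// gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA) with the ADD equation:
// out = 1 × src + (1 − src.alpha) × dst
function blendNormal(src, dst) {
  // src, dst: premultiplied [r, g, b, a] components in 0..1
  return src.map((channel, i) => channel + (1 - src[3]) * dst[i]);
}

// Half-transparent blue over opaque red:
blendNormal([0, 0, 0.5, 0.5], [1, 0, 0, 1]); // → [0.5, 0, 0.5, 1]
```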

blendFuncSeparate can be used to set these weights separately for the RGB and alpha channels of colors.


blendEquation sets which equation is used to mix the colors after they’ve been multiplied by weights. You can add, subtract, reverse subtract, and even min / max. Most blend modes use the add equation.

blendEquation basically sets the function “f” in the blend equation shown before. These are the different functions you can use.

blendEquationSeparate can be used to separately set which equation is used to mix the RGB and alpha channels of colors.


The StateSystem manages the blend mode for the PixiJS renderer. It works by mapping BLEND_MODES to the parameters for blendFunc and blendEquation described above. If you want to add more blend modes of your own, you can modify the undocumented blendModes map in the state system.

blendModes basically maps each blend-mode to a list of parameters to blendFuncSeparate and blendEquationSeparate. These lists can have up to 8 parameters but only the first two are required. The ADD equation is used by default.

import { BLEND_MODES } from 'pixi.js';

const stateSystem = renderer.state;
const gl = renderer.gl;

// Add OVERWRITE blend mode and set it to a unique value!
// (Any value not already used by BLEND_MODES works here.)
BLEND_MODES.OVERWRITE = 50;

// This blend mode will basically overwrite the destination
// color with the source color. The destination has zero
// weight in the output.
stateSystem.blendModes[BLEND_MODES.OVERWRITE] = [gl.ONE, gl.ZERO];

In the above snippet, I create an OVERWRITE blend mode that will make the background disappear wherever an object is rendered and only keep its pixels in the framebuffer.

Custom blending with shaders

The “fixed function” blend modes shown so far are somewhat limited. The normal blend mode is by far the most common; it is used for alpha compositing. The other Porter Duff variants can be used for masking. To make more complicated and artistic blend modes, a shader that samples the “source” and “destination” colors from textures is used.

PixiJS Picture implements this kind of shader as a filter. The source object is rendered into a filter texture. The destination pixels from the framebuffer or canvas are copied into a backdrop texture. The dimensions of these textures must be the same.

// Fragment shader

// Texture coordinates for pixels to be blended.
varying vec2 vTextureCoord;

// Filter texture with source colors
uniform sampler2D uSampler;

// Backdrop texture with destination colors
uniform sampler2D uBackdrop;

This type of filter is called a BlendFilter. The fragment shader in BlendFilter is a template:

void main(void)
{
   vec2 backdropCoord = vec2(vTextureCoord.x, uBackdrop_flipY.x + uBackdrop_flipY.y * vTextureCoord.y);
   vec4 b_src = texture2D(uSampler, vTextureCoord);
   vec4 b_dest = texture2D(uBackdrop, backdropCoord);
   vec4 b_res = b_dest;

   %blendCode%

   gl_FragColor = b_res;
}
  • b_src is the source color sampled from the filter texture
  • b_dest is the destination color sampled from the backdrop texture
  • b_res is the output color calculated from the source and destination colors by the blending code. It’s set to the destination color by default.

When a BlendFilter is constructed, the %blendCode% token is replaced by the blending code, which calculates the output color. This way multiple blend modes can be implemented by just writing the blending code for each one. You can find examples of these shader parts in the source code. To emulate the normal blend mode, the blending code would look something like this:

// Note: b_src and b_dest are premultiplied by alpha like
// all other colors in PixiJS.

b_res = b_src + (1.0 - b_src.a) * b_dest;

To use this code as a blend filter, you can construct a BlendFilter and apply it to a display object.

import { BlendFilter } from '@pixi/picture';

// Blending code for normal blend mode
const NORMAL_SHADER_FULL = `
    b_res = b_src + (1.0 - b_src.a) * b_dest;
`;

// Create globally shared instance of blend filter. This is a
// good optimization if you're going to use the filter on multiple
// objects.
const normalBlendFilter = new BlendFilter({
    blendCode: NORMAL_SHADER_FULL,
});

// Apply the filter on the source object
sourceObject.filters = [normalBlendFilter];

Built-in blend filters

PixiJS Picture implements filters for these blend modes:


getBlendFilter maps each blend mode to instances of their blend filters, which can be applied on a display object to emulate the blend mode.

import { BLEND_MODES } from 'pixi.js';
import { getBlendFilter } from '@pixi/picture';

sourceObject.filters = [getBlendFilter(BLEND_MODES.OVERLAY)];

PixiJS Picture also exports special versions of Sprite and TilingSprite where you can set the blendMode directly and a blend filter is implicitly applied:

// Note: Use the Sprite exported from @pixi/picture and
// not the default one from pixi.js!
import { Sprite } from '@pixi/picture';

const sprite = new Sprite(texture);

// Set the blendMode on the sprite directly. When the
// sprite renders, it will use the blend filter from
// getBlendFilter() automatically.
sprite.blendMode = BLEND_MODES.OVERLAY;


The dissolve blend mode randomly chooses pixels from the source or destination texture to output. The likelihood of choosing the source color is equal to its alpha, i.e. a 0.5 alpha means half of the output pixels will come from the top layer and the rest will be from the bottom layer. In this mode, colors aren’t truly “mixed”.

(left) The dissolve blend mode vs (right) the normal blend mode. Click to see the example live!

The blending code for this is really simple:

// Noise function that generates a random number between 0 and 1
float rand = fract(sin(dot(
    vTextureCoord.xy ,vec2(12.9898,78.233))) * 43758.5453);

if (rand < b_src.a) {
    b_res = b_src;
}

The famous one-liner rand is used to generate a random number between 0 and 1. If this random variable is less than the alpha of the source color, then the resulting color is set equal to the source color. Otherwise, the resulting color is set to the destination color (by default).
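For intuition, here is the same hash evaluated on the CPU (a sketch; the function name is mine):

```javascript
// fract(sin(dot(coord, vec2(12.9898, 78.233))) * 43758.5453) in JavaScript
function rand(x, y) {
  const v = Math.sin(x * 12.9898 + y * 78.233) * 43758.5453;
  return v - Math.floor(v); // fract()
}

// Deterministic per texture coordinate, but distributed like white noise
```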

A BlendFilter is needed to use it:

import { BlendFilter } from '@pixi/picture';

// Copy blendingCode
const DISSOLVE_FULL = `
    // Noise function that generates a random number between 0 and 1
    float rand = fract(sin(dot(
        vTextureCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);

    if (rand < b_src.a) {
        b_res = b_src;
    }
`;

// Create blend filter
const dissolveBlendFilter = new BlendFilter({
    blendCode: DISSOLVE_FULL,
});

// Apply it!
sourceObject.filters = [dissolveBlendFilter];

You can also augment BLEND_MODES and create a DISSOLVE blend mode. The blendFullArray exported from @pixi/picture contains the blending code for each mode – the dissolve code needs to be added as well.

import { BLEND_MODES } from 'pixi.js';
import { Sprite, blendFullArray } from '@pixi/picture';

// Any non-conflicting number high enough works here!
BLEND_MODES.DISSOLVE = 50;

// Register the blending code with @pixi/picture
blendFullArray[BLEND_MODES.DISSOLVE] = DISSOLVE_FULL;

// Set it on a PixiJS Picture sprite
new Sprite().blendMode = BLEND_MODES.DISSOLVE;

Now, you can set the blendMode to dissolve directly on a PixiJS Picture sprite.

Backdrop filters

Blend filters use the backdrop texture to read the destination color. If you have imported @pixi/picture, you can use the backdrop in other filters as well!

PixiJS Picture augments the filter system so that it copies pixels from framebuffer / canvas into the backdrop texture before a filter stack is applied. This backdrop texture is then available to filters as a uniform. The name of the uniform is configured by the backdropUniformName property. For BlendFilter, this is set to uBackdrop.

import { BackdropFilter } from '@pixi/picture';

const fragmentSrc = `
    // The filter texture containing the object being rendered
    uniform sampler2D uSampler;

    // The backdrop texture
    uniform sampler2D uBackdrop;

    // TODO: Your shader code
`;

class CustomBackdropFilter extends BackdropFilter {
    constructor() {
        super(undefined, fragmentSrc);
        // Set the backdropUniformName so the backdrop
        // texture is available to shader code.
        this.backdropUniformName = 'uBackdrop';
    }
}

Magnifying glasses

Ivan shows how you can use the backdrop texture with his “magnifying glasses” example:

Click to see the example live!

The grass background is rendered first. The two “lens” sprites are then rendered with a “displacement” filter. The lens texture is a displacement map – each texel encodes how much each pixel must be displaced.

The displacement texture

The R channel holds the displacement in the x-direction and the G channel holds it for the y-direction. The (R, G) values are centered at (0.5, 0.5) and then scaled by a certain magnitude.

x = (r - 0.5) * scale;
y = (g - 0.5) * scale;

The centering is done because color values must be between 0 and 1, and displacements can have negative values.
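In code, decoding one texel into a displacement vector looks like this (a CPU-side sketch; the names are mine):

```javascript
// (R, G) channel values in [0, 1]; 0.5 encodes "no displacement"
function decodeDisplacement(r, g, scale) {
  return { x: (r - 0.5) * scale, y: (g - 0.5) * scale };
}

decodeDisplacement(0.5, 0.5, 20); // → { x: 0, y: 0 } (the neutral gray)
decodeDisplacement(1.0, 0.0, 20); // → { x: 10, y: -10 }
```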

The displacement filter samples the “lens” texture and calculates the displacement vector for the current pixel. It then samples the backdrop texture by adding this displacement to the passed texture coordinates:

// Read the displacement data from the lens texture
vec4 map = texture2D(uSampler, vTextureCoord);

// Map it into the displacement vector
map -= 0.5;
map.xy *= scale * inputSize.zw;

// Add the displacement vector to the texture coordinates,
// and then clamp it in case it goes outside
// the backdrop texture.
vec2 dis = clamp(vec2(vTextureCoord.x + map.x, vTextureCoord.y + map.y), inputClamp.xy, inputClamp.zw);

// Handle y-flipping
dis.y = dis.y * backdropSampler_flipY.y + backdropSampler_flipY.x;

// Sample backdrop and output color
gl_FragColor = texture2D(backdropSampler, dis);

Mask filter

The MaskFilter allows you to apply a filter on the backdrop wherever it overlaps with a mask. A common use case is the backdrop blur effect, which can be implemented by simply passing a blur filter to MaskFilter:

// Gaussian blur filter
import { BlurFilter, Graphics } from 'pixi.js';

// Masking filter
import { MaskFilter } from '@pixi/picture';

const mask = new Graphics()
  .beginFill(0xffffff, 1)
  .drawRect(0, 0, 100, 100);

mask.filters = [
  new MaskFilter(new BlurFilter()),
];
The above has the effect of blurring the backdrop inside the rectangle (0, 0, 100, 100). The white rectangular mask itself won’t be visible; to make it appear, another translucent white rectangle must be added on top.

The backdrop blur example

If you’ve been reading up until here, I’m glad this article was informative. As the software industry goes remote, we all need a new office. Check out Teamflow, a virtual office built for the future.


PixiJS Layers Kit

Ivan Popelyshev recently migrated the PixiJS Layers Kit to be compatible with PixiJS 6. I helped him also document the @pixi/layers API here. This package introduces an interesting concept – making the order in which items in your scene tree render separate from the tree’s hierarchy. It allows you to change the display order of display objects by assigning layers.

Let’s start with a scenario in which this might be helpful – notes that can be dragged using handles that sit on top. Each note and its handle are kept in the same container – so they move together; however, they need to be in separate “layers” – one below for the note bodies and one above for the handles. In a conventional scene tree, this layering would not be possible without splitting the notes and their handles into separate containers and setting their positions individually.

But @pixi/layers makes this possible! You can group items in your scene tree and render those groups as layers. These items will only render when their parent layer becomes active during rendering.

// The stage to be rendered. You need to use a PIXI.display.Stage
// so that it correctly resolves each DisplayObject to its layer.
const stage = new PIXI.display.Stage();

// Set the parentGroup on items in your scene tree.
const GROUPS = {
  BG: new PIXI.display.Group(),
  FG: new PIXI.display.Group(),
};

// These groups are rendered by layers.
stage.addChild(new PIXI.display.Layer(GROUPS.BG));
stage.addChild(new PIXI.display.Layer(GROUPS.FG));

// How to make an item so that the handle is above all
// other content
function makeItem() {
  const item = new PIXI.Container();
  const handle = item.addChild(new PIXI.Graphics()); // Do drawing
  const body = item.addChild(new PIXI.Graphics()); // Do drawing

  // Set the group of the handle to foreground
  handle.parentGroup = GROUPS.FG;

  // Set the group of the body to background
  body.parentGroup = GROUPS.BG;

  return item;
}
This changes the display order of items in the scene tree.

How it works

When you import @pixi/layers, it applies a mixin on PixiJS’ DisplayObject and Renderer. It adds these new behaviors:

  • If the scene root is a Stage, the renderer now calls updateStage – which traverses the scene tree and resolves each item to its group and layer.
  • A DisplayObject will only render when its active parent layer is being rendered (which is resolved in the update-stage step).
  • It also patches the Interaction API’s TreeSearch to correctly hit-test a layer-enhanced scene tree.

In the updating phase, every Group with sorting enabled will sort its display objects by their zOrder. This z-order is different from the built-in z-index that PixiJS provides. Both z-index and z-order are used to sort objects into the order you want them rendered; z-order is the implementation provided by @pixi/layers. You can use both of them in conjunction – objects will be sorted by z-index first, then by z-order.

When a layer renders, it will set the active layer on the renderer – which then indicates that objects in that layer can now render.

An example of how layering changes the display order

Note that Layer extends Container, so you can add children directly to it like Item 3 in the diagram. You don’t have to set parentGroup or parentLayer on these children as it is implicit.

Z-grouping to reduce sorting overhead

In containers with many children, if only a few z-indices are being used they can be replaced with a fixed number of layers. For example, when users edit a group of items they expect them to come on top. Instead of setting a higher z-index on these items, this can be implemented by promoting items to an “editing” layer. This is much easier than shifting items into another container, which interferes with interaction.

This technique replaces sorting with a linear-time traversal of your scene tree. It should especially be used when your scene tree is large.

Using zOrder

A Group will sort its items by their z-order if the sort option is passed:

import { Group } from '@pixi/layers';

// Group that sorts its items by z-order before rendering
const zSortedGroup = new Group(0, true);

If the z-order of a scene is relatively static, it’s more efficient to disable automatic z-order sorting and invoke it manually:

// Don't sort each frame
group.enableSort = false;

// Instead, call this when z-orders change

Another neat feature is that Group emits a sort event for each item before sorting them. This can be used to dynamically calculate the z-order for each item, which is particularly useful when you want to order items based on their y-coordinate:

// You have to enable sort manually if you don't pass
// "true" or a callback to the constructor.
group.enableSort = true;

// Sort items so that objects lower down (along the y-axis) are rendered over others
group.on('sort', (item) => {
  item.zOrder = item.y;
});

Check out this example – here the bunnies are moving back and forth but the bottommost bunnies are rendered over others:

Layers as textures

Layers can be rendered into intermediate render-textures out of the box. The useRenderTexture option enables this:

layer.useRenderTexture = true;
layer.getRenderTexture(); // Use this texture to do stuff!

When a layer renders into a texture, you won’t see it on the canvas directly. Instead, its texture must be drawn manually onto the screen. This can be done using a sprite:

// Create a Sprite that will render the layer's texture onto the screen.
const layerRenderingSprite = app.stage.addChild(new PIXI.Sprite(layer.getRenderTexture()));

The layer’s texture is resized to the renderer’s screen – so too many layers with textures should not be used. This technique can be used to apply filters on a layer indirectly through the sprite rendering its texture:

// Apply a Gaussian blur on the sprite that indirectly renders the layer
layerRenderingSprite.filters = [new PIXI.filters.BlurFilter()];

By rendering into an intermediate texture, layers can be optimized to re-render when their content changes. You can split your scene tree so that layers are rendered separately – and then use the layer textures in the main scene.

// Main application with its own stage
const app = new PIXI.Application();

// The relatively static scene that is rendered separately. This is not
// directly shown on the canvas - later, a sprite is added
// to render its texture onto the canvas.
const staticStage = new PIXI.display.Stage();
const staticLayer = staticStage.addChild(new PIXI.display.Layer());

staticLayer.addChild(new ExpensiveMesh());
staticLayer.useRenderTexture = true;

// Add a sprite that renders the static scene's snapshot to the main
// scene.
app.stage.addChild(new PIXI.Sprite(staticLayer.getRenderTexture()));

// Rerenders the static scene in the next frame before
// the main scene is rendered onto the canvas. This should be invoked
// whenever the scene needs to be updated.
function rerenderExpensiveMeshNextFrame() {
  app.ticker.addOnce(() => {
    app.renderer.render(staticStage);
  });
}

Double buffering

A beautiful use of layer textures is for showing trails of moving objects in a scene. The trick is to render the last frame’s texture into the current frame with a lower alpha. By applying an alpha, previous frames quickly decay which ensures only a few frames are seen trailing.

Since WebGL does not allow rendering a texture into itself, double buffering is needed. The layer flip-flops between two textures: one to render into and one to render from. This can be enabled with useDoubleBuffer:

// Ensure the layer renders into a texture instead of the canvas
layer.useRenderTexture = true;

// Enable double buffering for this layer
layer.useDoubleBuffer = true;

Note that useRenderTexture must be enabled for double buffering – not enabling it will result in the layer rendering directly to the canvas.

Now, since the layer flip-flops between rendering into the two textures, the texture used to render the last frame back into the layer needs to swap each frame as well. The layer kit does this internally by hot-swapping the framebuffer and WebGL texture of getRenderTexture() each frame.

// Create a sprite to render the last frame of the layer
const lastFrameSprite = new PIXI.Sprite(layer.getRenderTexture());

// Apply an alpha so the last frame decays a bit
lastFrameSprite.alpha = 0.6;

// Render the last frame back into the layer
layer.addChild(lastFrameSprite);

// Render the layer into the main stage
stage.addChild(new PIXI.Sprite(layer.getRenderTexture()));

In the above snippet, sprites are created using the layer’s texture. When it runs, the sprites are actually flip-flopping between two different textures each frame. See it in action in Ivan’s example:



Thanks to Mat Grove’s work, PixiJS 6.1.0 will ship with support for Uniform Buffer Objects, a WebGL 2 optimization to make uniform uploads faster.


UBOs are handles to GPU buffers that store uniform data. They can be attached to a shader in a single step, without needing to upload each uniform field individually. If you share uniforms between multiple shaders, this can be used to reduce uploads of relatively static data.

Theoretically, you can optimize filters with UBOs. The common uniforms passed to Filter don’t change between them: inputSize, inputPixel, inputClamp, outputFrame.


To use UBOs in a shader, you’ll need to use GLSL ES 3.00.

#version 300 es
#define SHADER_NAME Example-Shader

precision highp float;

To migrate an existing GLSL 1 shader (the default), you need to use the in keyword instead of attribute, out instead of varying in the vertex shader, in instead of varying in the fragment shader, and then create an out variable in the fragment shader instead of using gl_FragColor.

You can then move some of your uniforms into a UBO block:

uniform location_data {
  mat3 locationMatrix;
};

In this example, you can reference the locationMatrix uniform directly.

void main(void) {
  mat3 matrix = locationMatrix;
}

To upload the UBO uniforms, you pass a UniformBufferGroup among your shader’s uniforms:

import { Matrix, Shader, UniformBufferGroup } from 'pixi.js';

const shader = Shader.from(vertexSrc, fragmentSrc, {
    location_data: UniformBufferGroup.uboFrom({
        locationMatrix: new Matrix().toArray(),
    }),
});

UniformBufferGroup.uboFrom creates a “static” UBO. If you ever change a value in it, you’ll need to flag it for re-upload by calling its update method.

Here’s an example that applies a gradient color to a texture using a UBO:

When should I use UBOs?

UBOs are useful if you have multiple shaders that share static uniform data. If your uniforms are dynamic and change very often, UBOs will not be much of an optimization.


Federated Events API

PixiJS 6.1.0 will ship with an experimental UI event infrastructure that provides a much more robust and DOM-compatible solution than the incumbent interaction plugin. This change made it through PixiJS’ RFC 7071 and merged in PR 7213.

I named it the “Federated Events API.” It’s federated because you can create multiple event boundaries and override their logic for parts of your scene. Each event boundary only controls events for the scene below it – not unlike a federation.


I developed the Federated Events API to overcome two significant limitations of the Interaction API:

  • DOM Incompatibility
  • Extensibility

Apart from these API-facing issues, we also needed to refactor the implementation to make it more maintainable.

DOM Incompatibility

The Interaction API had a synthetic InteractionEvent that didn’t overlap with the DOM’s PointerEvent well enough. If your UI mixed DOM elements with the canvas, event handlers still had to be written specifically for either PixiJS or the DOM.

The Federated Events API brings multiple events that inherit their DOM counterparts. This means your event handlers are agnostic to whether they’re looking at a DOM or PixiJS event. DisplayObject now also has addEventListener and removeEventListener methods.

The semantics of some interaction events diverged from those of the Pointer Events API.

  • pointerover and pointerout events didn’t bubble up to their common ancestor.
  • pointerenter and pointerleave events were missing.
  • pointermove events would fire throughout the scene graph, instead of just on the hovered display object.

This gets corrected in this new API!

Another important addition is the capture phase for event propagation. The new API’s event propagation matches that of the DOM.


Extensibility

The Interaction API’s implementation was very brittle, and overriding any detail was painful. The rigid architecture also meant that customizing interaction for a part of your scene was impossible.

This new API lets you override specific details of the event infrastructure. That includes:

  • optimizing hit testing (spatial hash acceleration?)
  • adding custom events (focus, anyone?)
  • modifying event coordinates (handy if you’re using projections)

The API also lets you mount event boundaries at specific parts of your scene graph to override events for display objects underneath it.

Other improvements


The EventSystem is the main point of contact for federated events. Adding it to your renderer registers the system’s event listeners, and once it renders, the API will propagate FederatedEvents to your scene. The EventSystem‘s job is to normalize native DOM events into FederatedEvents and pass them to the rootBoundary. It’s just a thin wrapper with a bit of configuration and cursor handling on top.

The EventBoundary object holds the API’s core functionality – taking an upstream event, translating it into a downstream event, and then propagating it. The translation is implemented as an “event mapping” – listeners are registered for handling specific upstream event types and are responsible for translating and propagating the corresponding downstream events. This mapping isn’t always one-to-one; the default mappings are as follows:

  • pointerdown → pointerdown
  • pointermove → pointerout, pointerleave, pointermove, pointerover, pointerenter
  • pointerup → pointerup
  • pointerout → pointerout, pointerleave
  • pointerover → pointerover, pointerenter
  • wheel → wheel

This list doesn’t include the mouse- and touch-specific events that are emitted too.
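Conceptually, an event mapping is just a table from an upstream event type to a handler that translates it into downstream events. Here is a minimal, framework-free sketch of the idea (the class and method names are illustrative, not the PixiJS API):

```javascript
// Minimal sketch of "event mapping": upstream event types map to
// handlers that decide which downstream events to emit.
class TinyBoundary {
  constructor() {
    this.mappings = {};   // upstream type -> handler
    this.emitted = [];    // downstream events, recorded for demonstration
  }

  // Register a handler for an upstream event type.
  addEventMapping(type, handler) {
    this.mappings[type] = handler;
  }

  // Translate one upstream event into downstream events.
  mapEvent(event) {
    const handler = this.mappings[event.type];
    if (handler) handler(event);
  }
}

const boundary = new TinyBoundary();

// "pointermove" fans out into out/leave/move/over/enter,
// mirroring the default mapping listed above.
boundary.addEventMapping('pointermove', (e) => {
  const downstream = [
    'pointerout', 'pointerleave', 'pointermove', 'pointerover', 'pointerenter',
  ];
  for (const type of downstream) {
    boundary.emitted.push({ type, x: e.x, y: e.y });
  }
});

boundary.mapEvent({ type: 'pointermove', x: 10, y: 20 });
console.log(boundary.emitted.map((e) => e.type).join(','));
// pointerout,pointerleave,pointermove,pointerover,pointerenter
```

The real EventBoundary’s mappings also perform hit-testing and propagation, but the one-to-many translation shown here is the core shape of the mechanism.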


An event boundary can search through and propagate events throughout a scene graph connected by parent-child relationships.

In certain cases, however, you may want to “hide” the implementation scene for an object. @pixi-essentials/svg does this to prevent your scene from being dominated by SVG rendering nodes. Instead of holding the nodes below as children, you place them in a root container and render it separately.

// Crude anatomy of a disconnected scene
class HiddenScene {
  root: Container;

  render(renderer) {
    // Render the hidden content separately from the main scene
    renderer.render(this.root);
  }
}

This poses a problem when you want interactivity to still flow through this “point of disconnection”. Here, an additional event boundary that accepts upstream events and propagates them through root can fix this! See the nested boundary example at the end for how.


Basic usage

Since the Federated Events API won’t be production-ready until PixiJS 7, it’s not enabled by default. To use it, you’ll have to delete the interaction plugin and install the EventSystem manually. If you’re using a custom bundle, you can remove the @pixi/interaction module too.

import { EventSystem } from '@pixi/events';
import { Renderer } from '@pixi/core'; // or from 'pixi.js'

delete Renderer.__plugins.interaction;

// Assuming your renderer is at "app.renderer"
if (!('events' in app.renderer)) {
    app.renderer.addSystem(EventSystem, 'events');
}


Let’s start with this barebones example – handling clicks on a display object. Just like the Interaction API, you need to mark it interactive and add a listener.

// Enable interactivity for this specific object. This
// means that an event can be fired with this as a target.
object.interactive = true;

// Listen to clicks on this object!
object.addEventListener('click', function onClick() {
    // Make the object bigger each time it's clicked!
    object.scale.set(object.scale.x * 1.1);
});

A handy tool for handling “double” or even “triple” clicks is the event’s detail property. The event boundary tracks how many clicks have occurred within 200ms of each other; for a double click, it’ll be set to 2. The following example scales the bunny based on this property – you have to click fast to make the bunny larger!
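The detail bookkeeping can be pictured as a small counter that resets whenever clicks are spaced more than 200ms apart. This is a standalone sketch of that logic, not the boundary’s internal implementation:

```javascript
// Sketch of multi-click tracking: "detail" counts successive
// clicks that land within 200ms of the previous one.
class ClickTracker {
  constructor(interval = 200) {
    this.interval = interval;
    this.lastTime = -Infinity;
    this.detail = 0;
  }

  // Call on each click with a timestamp in milliseconds;
  // returns the detail value for that click.
  click(now) {
    this.detail = (now - this.lastTime <= this.interval) ? this.detail + 1 : 1;
    this.lastTime = now;
    return this.detail;
  }
}

const tracker = new ClickTracker();
console.log(tracker.click(0));    // 1 - single click
console.log(tracker.click(150));  // 2 - double click (within 200ms)
console.log(tracker.click(300));  // 3 - triple click
console.log(tracker.click(900));  // 1 - too slow, the counter resets
```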


Dragging is done slightly differently with the new API – you have to register the pointermove handler on the stage, not the dragged object. Otherwise, if the pointer moves out of the selected DisplayObject, that object will stop receiving pointermove events. (To emulate the InteractionManager’s behavior, enable moveOnAll on the root boundary.)

The upside is much better performance and mirroring of the DOM’s semantics.

function onDragStart(e) {
    selectedTarget = e.target;

    // Start listening to dragging on the stage
    app.stage.addEventListener('pointermove', onDragMove);
}

function onDragMove(e) {
    // Don't use e.target because the pointer might
    // move out of the bunny if the user drags fast,
    // which would make e.target become the stage.
    selectedTarget.parent.toLocal(
        e.global, null, selectedTarget.position);
}


The wheel event is available to use just like any other! You can move your display object by the event’s deltaY to implement scrolling. This example does that for a slider’s handle.

Right now, wheel events are implemented as “passive” listeners. That means you can’t call preventDefault() to block the browser from scrolling other content, so you should only use wheel events in fullscreen canvas apps.

slider.addEventListener('wheel', onWheel);

Advanced use-cases

Manual hit-testing

To override a specific part of event handling, you can inherit from EventBoundary and set your subclass as the event system’s rootBoundary!

Here’s an example that uses a SpatialHash to accelerate hit-testing. A special HashedContainer holds a spatial hash for its children, and that is used to search through them instead of a brute force loop.

This technique is particularly useful for horizontal scene graphs, where a few containers hold most of the display objects as children.
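The core idea of spatial-hash hit-testing is to bucket children by grid cell so a point query only inspects the objects in the cell under the pointer, instead of brute-forcing every child. A minimal standalone sketch with axis-aligned rectangles (no PixiJS types; names are illustrative):

```javascript
// Sketch of spatial-hash hit testing: objects are bucketed by the
// grid cells their bounds overlap, so a point query scans one bucket.
class SpatialHash {
  constructor(cellSize = 100) {
    this.cellSize = cellSize;
    this.buckets = new Map(); // "cx,cy" -> array of objects
  }

  // Add an object with axis-aligned bounds to every cell it overlaps.
  insert(obj) { // obj: { id, x, y, width, height }
    const minX = Math.floor(obj.x / this.cellSize);
    const maxX = Math.floor((obj.x + obj.width) / this.cellSize);
    const minY = Math.floor(obj.y / this.cellSize);
    const maxY = Math.floor((obj.y + obj.height) / this.cellSize);

    for (let cx = minX; cx <= maxX; cx++) {
      for (let cy = minY; cy <= maxY; cy++) {
        const key = `${cx},${cy}`;
        if (!this.buckets.has(key)) this.buckets.set(key, []);
        this.buckets.get(key).push(obj);
      }
    }
  }

  // Return the objects whose bounds contain the point, scanning
  // only the bucket for the point's cell.
  hitTest(x, y) {
    const key = `${Math.floor(x / this.cellSize)},${Math.floor(y / this.cellSize)}`;
    const candidates = this.buckets.get(key) || [];
    return candidates.filter((o) =>
      x >= o.x && x < o.x + o.width && y >= o.y && y < o.y + o.height);
  }
}

const hash = new SpatialHash(100);
hash.insert({ id: 'a', x: 10, y: 10, width: 50, height: 50 });
hash.insert({ id: 'b', x: 500, y: 500, width: 50, height: 50 });

console.log(hash.hitTest(20, 20).map((o) => o.id)); // [ 'a' ]
console.log(hash.hitTest(300, 300).length);         // 0
```

In a real boundary subclass, a structure like this would back the hit-testing override, with the hash rebuilt or updated when children move.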

Nested boundaries

The ultimate example: how you can use a nested EventBoundary in your scene graph. As mentioned before, this is useful when you have a disconnected scene graph and you want events to propagate over points of disconnection.

To forward events from upstream, you make the “subscene” interactive, listen to all the relevant events, and map them into the event boundary below. The event boundary should be attached to the content of your scene. It’s like implementing a stripped down version of the EventSystem.

// Override copyMouseData to apply the inverse worldTransform
// to global coords
this.boundary.copyMouseData = (from, to) => {
    // Apply the default implementation first
    EventBoundary.prototype.copyMouseData
        .call(this.boundary, from, to);

    // Then bring global coords into the content's world
    this.worldTransform.applyInverse(
        to.global, to.global);
};

// Propagate these events down into the content's
// scene graph! (Adjust the list of forwarded
// events to your needs.)
[
    'pointerdown', 'pointermove', 'pointerup',
    'pointerover', 'pointerout', 'wheel',
].forEach((event) => {
    this.addEventListener(event,
        (e) => this.boundary.mapEvent(e));
});

To make the cursor on internal objects work too, you should expose the event boundary’s cursor property on the subscene.

get cursor() {
    return this.boundary.cursor;
}

PixiJS Tilemap Kit 3

In my effort to bring tighter integration to the PixiJS ecosystem, I’m upgrading external PixiJS packages and working towards lifting them to the standard of the main project. @pixi/tilemap 3 is the first package in this process. Yes, I’ve republished pixi-tilemap as @pixi/tilemap.

Here, I want to cover the new, leaner API that @pixi/tilemap 3 brings to the table. This package by Ivan Popelyshev gives you an optimized rectangular tilemap implementation you can use to render a background for your game or app composed of tile textures. The documentation is available at https://api.pixijs.io/@pixi/tilemap.html.


A tileset is the set of tile textures used to build the scene. Generally, you’d want the tileset to be in one big base-texture to reduce the number of network requests and improve rendering batch efficiency.

To use @pixi/tilemap, you’ll first need to export a tileset atlas as a sprite sheet. PixiJS’ spritesheet loader populates your tile textures from the sheet’s manifest. If you don’t have one at hand, you can create a sample tileset as follows:

  • Download this freebie tileset from CraftPix.net here: https://craftpix.net/download/24818/. You’ll need to sign up, however.
  • Download and install TexturePacker: https://www.codeandweb.com/texturepacker
  • Drag the “PNG” folder of the downloaded tileset into TexturePacker. It will automatically pack all the tiles into one big atlas image.
  • Then click on “Publish sprite sheet” and save the manifest.
The generated tileset should look like this!


The Tilemap class renders a tilemap from a predefined set of base-textures containing the tile textures. Each rendered tile references its base-texture by an index into the tileset array. The tileset array is first passed when the tilemap is created; however, you can still append base-textures after instantiation without affecting previously added tiles.

The following example paints a static tilemap from a CraftPix tileset.

The texture passed to tile() must be one of the atlases in the tilemap’s tileset. Otherwise, the tilemap will silently drop the tile. As we’ll discuss later on, CompositeTilemap can be used to get around this limitation.

Animated Tiles

The options passed to tile let you animate the rendered tile between different tile textures stored in the same base-texture. The different frames must be located uniformly in a table (or a single row/column).

The texture you pass to tile will be the first frame. Then the following parameters specify how Tilemap will find other frames:

  • animX: The x-offset between frame textures.
  • animY: The y-offset between frames.
  • animCountX: The number of frame textures per row of the table. This is 1 by default.
  • animCountY: The number of frames per column of the table. This is 1 by default.

If your frames are all in a row, you don’t need to specify animY and animCountY.

The animation frame vector (tileAnim) specifies which frame to use for all tiles in the tilemap. tileAnim[0] specifies the column modulo, and tileAnim[1] specifies the row modulo. Since it wraps around when the column/row is fully animated, you don’t have to do it yourself.

The above example takes advantage of the fact that some regular doors and wide doors are placed in a row in the sample atlas. animX, animCountX are used to animate between them every 500ms.
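The frame arithmetic can be sketched as follows: the global tileAnim vector advances a frame index, and each tile wraps that index by its own animCountX/animCountY to pick an offset into its frame table. This is my reading of the parameters, not the exact shader code:

```javascript
// Sketch: compute the texture-coordinate offset for an animated tile.
// tileAnim is the tilemap-wide animation frame vector; each tile wraps
// it by its own animCountX/animCountY so frames cycle automatically.
function frameOffset(tile, tileAnim) {
  const col = tileAnim[0] % tile.animCountX; // wrap around the columns
  const row = tileAnim[1] % tile.animCountY; // wrap around the rows
  return {
    dx: col * tile.animX, // x-offset between frame textures
    dy: row * tile.animY, // y-offset between frame textures
  };
}

// A 4-frame animation laid out in a single row of 32px-wide frames:
const doorTile = { animX: 32, animY: 0, animCountX: 4, animCountY: 1 };

console.log(frameOffset(doorTile, [0, 0])); // { dx: 0, dy: 0 }
console.log(frameOffset(doorTile, [3, 0])); // { dx: 96, dy: 0 }
console.log(frameOffset(doorTile, [5, 0])); // { dx: 32, dy: 0 } (5 % 4 = 1)
```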

Tileset limitations

Tilemap renders the whole tilemap in one draw call. It doesn’t build intermediate geometry batches the way PixiJS’ Graphics does. All the tileset base-textures are bound to the GPU together.

This means that there’s a limit to how many tile sprite sheets you can use in each tilemap. WebGL 1 guarantees that at least 8 base-textures can be used together; however, most devices support 16. You can check this limit by evaluating:

renderer.gl.getParameter(renderer.gl.MAX_TEXTURE_IMAGE_UNITS)

If your tileset contains more base-textures than this limit, Tilemap will silently fail to render its scene.

If you’re using only one sprite sheet like the examples above, you don’t need to worry about hitting this limit. Otherwise, CompositeTilemap is here to help.


A “tilemap composite” will layer tilesets into multiple tilemaps. You don’t need to predefine the base-textures you’re going to use. Instead, it will try to find a tilemap with the same base-texture in its tileset when you add a tile; if none exists, the base-texture is added into a layered tilemap’s tileset. New tilemaps are automatically created when existing ones fill up.
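The allocation logic can be sketched as: reuse a layer whose tileset already holds the base-texture, else append to a layer with room, else start a new layer. This is illustrative pseudologic under my reading of the description above, not the library’s internals:

```javascript
// Sketch of how a composite might assign a tile's base-texture to a
// layer tilemap. texturesPerTilemap caps how many base-textures one
// layer can hold (mirroring the TEXTURES_PER_TILEMAP setting).
function assignLayer(layers, baseTexture, texturesPerTilemap = 16) {
  // Reuse a layer that already contains this base-texture.
  for (const layer of layers) {
    if (layer.tileset.indexOf(baseTexture) >= 0) return layer;
  }

  // Otherwise append to the last layer if it still has room...
  const last = layers[layers.length - 1];
  if (last && last.tileset.length < texturesPerTilemap) {
    last.tileset.push(baseTexture);
    return last;
  }

  // ...or start a brand-new layer tilemap.
  const fresh = { tileset: [baseTexture] };
  layers.push(fresh);
  return fresh;
}

const layers = [];
assignLayer(layers, 'grass.png', 2);
assignLayer(layers, 'water.png', 2);
assignLayer(layers, 'lava.png', 2);  // third texture forces a second layer

console.log(layers.length);          // 2
console.log(layers[1].tileset);      // [ 'lava.png' ]
```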

In most cases, you can trivially swap Tilemap for CompositeTilemap. However, you have to be careful about z-ordering: tiles using textures in later tilemaps will always render above those in earlier ones, which may become a problem with overlapping tiles.

The following example uses a CompositeTilemap to render one of the previous examples. Instead of using a separate Sprite for the background, it adds the background itself as a tile too.

Tilemap rendering


Tilemap internally stores tiles in a geometry buffer, which contains interleaved data for each vertex.

  • Position (in local space)
  • Texture coordinates
  • Texture frame of the tile
  • Animation parameters (specially encoded into a 32-bit 2-vector)
  • Texture index (into the tileset)

This buffer is mostly static and is lazily updated whenever the tiles are modified between rendering ticks. If the tilemap is left unchanged, the geometry is used directly from graphics memory.


The TileRenderer plugin holds the shared tilemap shader.

The vertex program decodes the animation parameters, calculates and passes the texture frame, texture coordinates, and texture index to the fragment program. The animation frame vector is passed as a uniform. 

Then, the fragment program samples the texel from the appropriate texture and outputs the pixel.


@pixi/tilemap’s settings object (discussed further on) contains a property called TEXTILE_UNITS. This is the number of tile base-textures that are “sewn” together when uploaded to the GPU. You can use this to increase the tileset limit per texture.

The “combined” texture is called a textile. The textile is divided into a 2-column table of square slots. Each slot is a square of size TEXTILE_DIMEN. Your tileset base-textures must be smaller than this dimension for the textile to work.
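Given the 2-column layout, the slot position for the i-th base-texture follows directly from TEXTILE_DIMEN. A sketch of the arithmetic, assuming slots are packed row by row:

```javascript
// Sketch: where the i-th base-texture lands inside a textile laid out
// as a 2-column table of square slots of size textileDimen.
function textileSlot(index, textileDimen = 1024) {
  return {
    x: (index % 2) * textileDimen,            // column 0 or 1
    y: Math.floor(index / 2) * textileDimen,  // one row per two slots
  };
}

// With TEXTILE_UNITS = 4 and TEXTILE_DIMEN = 256, the four slots are:
for (let i = 0; i < 4; i++) {
  console.log(i, textileSlot(i, 256));
}
// 0 { x: 0, y: 0 }
// 1 { x: 256, y: 0 }
// 2 { x: 0, y: 256 }
// 3 { x: 256, y: 256 }
```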

The following demonstration shows what a textile looks like when uploaded. The textile-tile dimension was set to 256 so that images aren’t spread out too far (it is 1024 by default). 


@pixi/tilemap exports a “settings” object that you should configure before a tilemap is created.

  • TEXTURES_PER_TILEMAP: This is the limit of tile base-textures kept in each layer tilemap of a composite. Once the last tilemap is filled to this limit, the next texture will go into a new tilemap.
  • TEXTILE_DIMEN, TEXTILE_UNITS: Used to configure textiles. If TEXTILE_UNITS is set to 1 (the default), textiles are not used.
  • TEXTILE_SCALE_MODE: Used to set the scaling mode of the resulting textile textures.
  • use32bitIndex: This option enables rendering tilemaps with more than 16K tiles (64K vertices).
  • DO_CLEAR: This configures whether textile slots are cleared before the tile textures are uploaded. You can disable this if tile textures “fully” cover TEXTILE_DIMEN and leave no space for a garbage background to develop.

Canvas support

@pixi/tilemap has a canvas fallback, although it is significantly slower. In the future, I might spin out a @pixi/canvas-tilemap to make this fallback optional.