
Do not use uncompressed textures

Texture uploading

The standard way to upload a texture to graphics hardware in WebGL is to use texImage2D. It comes from the OpenGL ES glTexImage2D function, which accepts no native multimedia formats, only a buffer of raw pixel data. That means the WebGL implementation abstracts away an inefficient process of decompressing images before uploading them to graphics memory.
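
For context, here’s roughly what that standard path looks like, as a minimal sketch (assuming a WebGL context gl and a decoded imageElement):

// The browser has already decoded the image into raw pixels;
// texImage2D uploads that uncompressed RGBA buffer.
gl.texImage2D(
    gl.TEXTURE_2D, 0, gl.RGBA,
    gl.RGBA, gl.UNSIGNED_BYTE,
    imageElement,
);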

This under-the-hood decompression takes up CPU cycles when an application loads its assets. Not only that, but uncompressed textures are memory-inefficient; for context, a 2048×2048 8-bit RGBA texture takes up a minimum of 16 MB of graphics memory (2048 × 2048 pixels × 4 bytes per pixel).

It’s easy to load up many textures and hog hundreds of megabytes of graphics memory. When the GPU memory limit is hit, desktop computers will generally start paging memory to disk, causing system-wide graphics pauses that last for seconds. On mobile devices, the OS immediately kills applications that use too much memory to prevent any slowdowns. I’ve noticed iOS Safari will reload the page if it takes up too much graphics memory, including the memory used by the browser to render HTML.

Texture compression

I recommend using GPU-compressed texture formats to reduce the resources consumed by an application’s textures. This form of texture compression is designed specifically for hardware-accelerated graphics programs. GPU-compressed formats generally use “block compression”, where blocks of pixels are compressed into a single data point. I dive into how you can use them in PixiJS at the end.

The WebGL API provides support for GPU-compressed texture formats through various extensions. A GPU-compressed texture does not get decompressed before being uploaded to graphics memory. Instead, the GPU decompresses texels on-the-fly in hardware when a shader reads the texture. Texture compression is designed to allow “random access” reads that don’t require the GPU to decompress the whole texture to retrieve a single texel.
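
At the API level, that upload looks something like this sketch (assuming data holds the raw compressed payload and the device supports S3TC):

const ext = gl.getExtension('WEBGL_compressed_texture_s3tc');

// The compressed blocks are uploaded as-is; no CPU-side
// decoding happens.
gl.compressedTexImage2D(
    gl.TEXTURE_2D, 0,
    ext.COMPRESSED_RGBA_S3TC_DXT5_EXT,
    width, height, 0,
    data,
);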

By using compressed texture formats, a WebGL application can free up CPU cycles spent by the browser decompressing images and reduce the video memory occupied by textures. The latter is especially important on low-end devices or when your application is using a lot of textures.

I built a tool, Zap, to let you generate GPU-compressed textures with no setup!

Caveats

GPU-compressed textures come with their caveats, like all good things in life!

One trade-off is that GPU-compressed textures take up more disk space than native images, like PNGs, which use more sophisticated compression algorithms. This means downloading a bunch of compressed textures can increase asset loading time over the network.

GPU-compressed formats are also very platform-dependent. Most graphics cards support only one or two of the several texture compression formats. This is because the decoding algorithm has to be built into the hardware. And so, having a fallback to native images is still necessary; maintaining images in several different compressed formats is also a bunch of homework. PixiJS’ @pixi/compressed-textures allows an application to serve a texture manifest that lists the available formats; then, the texture loader picks the appropriate version based on the device.
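
A common way to pick the right version on the client is to probe for the relevant WebGL extensions, as in this sketch (the extension names are real; the file layout is illustrative):

const gl = document.createElement('canvas').getContext('webgl');
const s3tc = gl.getExtension('WEBGL_compressed_texture_s3tc');
const etc1 = gl.getExtension('WEBGL_compressed_texture_etc1');

// Serve the first format the device can decode in hardware,
// falling back to a native image.
const url = s3tc ? 'tile.s3tc.ktx'
    : etc1 ? 'tile.etc1.ktx'
    : 'tile.png';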

Supercompression

“Supercompressed” texture formats like Basis solve the disk size and platform-dependency problems of GPU-compressed formats. A supercompressed texture is an intermediate format served over the network and then transcoded into a supported GPU-compressed format on the client. Basis provides a transcoder build that can be invoked from JavaScript. As a fallback, the Basis transcoder also allows decoding to an uncompressed 16-bit RGB texture.

The Basis transcoder is an additional ~500 KB of JavaScript/WebAssembly code (without gzip compression). Fetching and executing it adds a small overhead when initializing, but that should be worth it if you use more than a megabyte of images. The Basis supercompressed format is still smaller than native formats like PNG, so you might actually save download time on average.

Testing how much GPU memory is saved

If you’ve kept reading to this point, you might be wondering how to tell whether using compressed textures is worth it.

I made two sample apps that load the same 600x400px texture 100 times, one using uncompressed textures, and the other using compressed textures. A small canvas is used to reduce the framebuffer’s effect on memory usage. I used PixiJS because PixiJS 6’s @pixi/compressed-textures has out-of-the-box support for compressed formats!

You can load the sample apps in Chrome and open the browser task manager. Note that you might have to wait up to 30 seconds for them to load because Replit seems to throttle the image downlink. To view the GPU memory of each process, you’ll need to enable that column.

The uncompressed sample (above) takes 100 MB of GPU memory.

The compressed sample, meanwhile, takes only 30 MB – that’s 70% less hogged memory! PixiJS also has to create an HTMLImageElement for each native image texture, and you can see that this also affects main memory usage.

Of course, the trade-off is in the 4-5× download size of the textures (6 MB vs. 25 MB). As I mentioned earlier, if you’re downloading more than a megabyte of textures, it’s worth using supercompression to save bandwidth.

PixiJS 6’s @pixi/basis adds support for using Basis textures. To test Basis, I generated a Basis texture from Zap and plugged it into this sample.

The results are similar to those of the compressed texture sample; in this case, PixiJS chose a more compact compressed format (DXT1) than the one I uploaded in the prior sample (DXT5), so GPU memory usage decreased further.

Moreover, this sample fetches all textures in just 1.7 MB of network usage!

Notice the “dedicated worker” row in the task manager. @pixi/basis creates a pool of workers for transcoding so the application UI does not slow down.

Try it out using Zap

Zap is a tool I built to help you get started with texture compression. Traditional tools like Compressonator, NVIDIA’s Texture Tools, and PVRTexTool are clunky, OS-specific, and have a steep learning curve. I literally had to install Windows to test out Compressonator, and it was really slow.

Zap is a simple web app that lets you upload a texture to be processed by my server. It supports 10 different compression formats plus the UASTC Basis supercompression format. Not only that, it’s free to use (for now 😀).

To use Zap, simply upload a native image and select the compression formats you want. That will redirect you to a permanent link where the compressed textures will be available after processing; it may take several seconds for larger images. Note that the compressed textures will be deleted after a few days.

PixiJS Compressed Textures Support

PixiJS 6 supports most of the GPU-compressed formats out of the box (the exception being BC7). You can use them just like you use native images.

To use the Basis format, you need to import BasisLoader from @pixi/basis, load the transcoder, and register the loader. Then, the PixiJS Loader API can be used in the standard manner:

// Include the following script, if not using ESM
// <script src="https://cdn.jsdelivr.net/npm/@pixi/basis@6.2.2/dist/browser/basis.min.js"></script>

// Load transcoder from jsDelivr
const BASIS_TRANSCODER_JS_URL = 'https://cdn.jsdelivr.net/npm/@pixi/basis@6.2.2/assets/basis_transcoder.js';
const BASIS_TRANSCODER_WASM_URL = 'https://cdn.jsdelivr.net/npm/@pixi/basis@6.2.2/assets/basis_transcoder.wasm';

// Without this, PixiJS can't decompress *.basis files!
PIXI.BasisLoader.loadTranscoder(
  BASIS_TRANSCODER_JS_URL,
  BASIS_TRANSCODER_WASM_URL,
);

// Make sure the BasisLoader is being used!
PIXI.Loader.registerPlugin(PIXI.BasisLoader);

// Usage:
PIXI.Loader.shared
    .add("your-file.basis", "your-file.basis")
    .load((_, resources) => {
        // The texture is on the loader resource!
        const texture = resources['your-file.basis'].texture;
    });



Federated Events API

PixiJS 6.1.0 will ship with an experimental UI event infrastructure that provides a much more robust and DOM-compatible solution than the incumbent interaction plugin. This change made it through PixiJS’ RFC 7071 and was merged in PR 7213.

I named it the “Federated Events API.” It’s federated because you can create multiple event boundaries and override their logic for parts of your scene. Each event boundary only controls events for the scene below it – not unlike a federation.

Context

I developed the Federated Events API to overcome the two significant limitations in the Interaction API –

  • DOM Incompatibility
  • Extensibility

Apart from these API-facing issues, we also needed to refactor the implementation to make it more maintainable.

DOM Incompatibility

The Interaction API had a synthetic InteractionEvent that didn’t overlap well enough with the DOM’s PointerEvent. If your UI shared DOM elements, event handlers still had to be specific to either PixiJS or the DOM.

The Federated Events API brings multiple events that inherit from their DOM counterparts. This means your event handlers are agnostic to whether they’re looking at a DOM or PixiJS event. DisplayObject now also has addEventListener and removeEventListener methods.

The semantics of some interaction events diverged from those of the Pointer Events API.

  • pointerover and pointerout events didn’t bubble up to their common ancestor.
  • pointerenter and pointerleave events were missing.
  • pointermove events would fire throughout the scene graph, instead of just on the hovered display object.

This gets corrected in the new API!

Another important addition is the capture phase for event propagation. The new API’s event propagation matches that of the DOM.
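
For instance, a capture-phase listener on an ancestor runs before the target’s own listeners, as in this minimal sketch (assuming the DOM-style options argument):

// The capture listener intercepts the event on its way
// down to the target, mirroring DOM semantics.
parent.addEventListener('pointerdown', onCaptured, { capture: true });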

Extensibility

The Interaction API’s implementation was very brittle, and overriding any detail meant hell. The rigid architecture also meant that customizing interaction for a part of your scene was impossible.

This new API lets you override specific details of the event infrastructure. That includes:

  • optimizing hit testing (spatial hash acceleration?)
  • adding custom events (focus, anyone?)
  • modifying event coordinates (handy if you’re using projections)

The API also lets you mount event boundaries at specific parts of your scene graph to override events for the display objects underneath them.

Other improvements

Architecture

The EventSystem is the main point of contact for federated events. Adding it to your renderer will register the system’s event listeners, and once it renders, the API will propagate FederatedEvents to your scene. The EventSystem’s job is to normalize native DOM events into FederatedEvents and pass them to the rootBoundary. It’s just a thin wrapper with a bit of configuration and cursor handling on top.

The EventBoundary object holds the API’s core functionality – taking an upstream event, translating it into a downstream event, and then propagating it. The translation is implemented as an “event mapping” – listeners are registered for handling specific upstream event types and are responsible for translating and propagating the corresponding downstream events. This mapping isn’t always one-to-one; the default mappings are as follows:

  • pointerdown → pointerdown
  • pointermove → pointerout, pointerleave, pointermove, pointerover, pointerenter
  • pointerup → pointerup
  • pointerout → pointerout, pointerleave
  • pointerover → pointerover, pointerenter
  • wheel → wheel

This list doesn’t include the mouse- and touch-specific events that are emitted too.

Federation

An event boundary can search through and propagate events throughout a connected scene graph, i.e. one whose nodes are linked by parent-child relationships.

In certain cases, however, you may want to “hide” the implementation scene for an object. @pixi-essentials/svg does this to prevent your scene from being dominated by SVG rendering nodes. Instead of holding the nodes below as children, you place them in a root container and render it separately.

// Crude anatomy of a disconnected scene
class HiddenScene {
  root: Container;
  
  render(renderer) {
    renderer.render(this.root);
  }
}

This poses a problem when you want interactivity to still flow through this “point of disconnection”. Here, an additional event boundary that accepts upstream events and propagates them through root can fix this! See the nested boundary example at the end for how.

Examples

Basic usage

Since the Federated Events API won’t be production-ready until PixiJS 7, it’s not enabled by default. To use it, you’ll have to delete the interaction plugin and install the EventSystem manually. If you’re using a custom bundle, you can remove the @pixi/interaction module too.

import { EventSystem } from '@pixi/events';
import { Renderer } from '@pixi/core'; // or pixi.js

delete Renderer.__plugins.interaction;

// Assuming your renderer is at "app.renderer"
if (!('events' in app.renderer)) {
    app.renderer.addSystem(EventSystem, 'events');
}

Clicks

Let’s start with this barebones example – handling clicks on a display object. Just like the Interaction API, you need to mark it interactive and add a listener.

// Enable interactivity for this specific object. This
// means that an event can be fired with this as a target.
object.interactive = true;

// Listen to clicks on this object!
object.addEventListener('click', function onClick() {
    // Make the object bigger each time it's clicked!
    object.scale.set(object.scale.x * 1.1);
});

A handy tool for handling “double” or even “triple” clicks is the event’s detail property. The event boundary keeps track of how many clicks have occurred, each within 200ms of the last. For double clicks, it’ll be set to 2. The following example scales the bunny based on this property – you have to click fast to make the bunny larger!
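
A minimal sketch of that pattern (the bunny object is assumed):

bunny.addEventListener('click', (e) => {
    // e.detail is 1 for single clicks, 2 for double
    // clicks, and so on.
    if (e.detail === 2) {
        bunny.scale.set(bunny.scale.x * 1.1);
    }
});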

Dragging

Dragging is done slightly differently with the new API – you have to register the pointermove handler on the stage, not on the dragged object. Otherwise, if the pointer moves out of the selected DisplayObject, it’ll stop receiving pointermove events. (To emulate the InteractionManager’s behavior, enable moveOnAll in the root boundary.)

The upside is much better performance and mirroring of the DOM’s semantics.

let selectedTarget = null;

function onDragStart(e) {
    selectedTarget = e.target;

    // Start listening to dragging on the stage
    app.stage.addEventListener('pointermove', onDragMove);
}

function onDragMove(e) {
    // Don't use e.target because the pointer might
    // move out of the bunny if the user drags fast,
    // which would make e.target become the stage.
    selectedTarget.parent.toLocal(
        e.global, null, selectedTarget.position);
}
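
To round out the sketch, the wiring might look like this (assuming a draggable bunny on the stage):

bunny.interactive = true;
bunny.addEventListener('pointerdown', onDragStart);

// Stop dragging when the pointer is released
app.stage.addEventListener('pointerup', () => {
    app.stage.removeEventListener('pointermove', onDragMove);
    selectedTarget = null;
});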

Wheel

The wheel event is available to use just like any other! You can move your display object by the event’s deltaY to implement scrolling. This example does that for a slider’s handle.

Right now, wheel events are implemented as “passive” listeners. That means you can’t call preventDefault() to block the browser from scrolling other content, so you should only rely on wheel events in fullscreen canvas apps.

slider.addEventListener('wheel', onWheel);
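
A possible onWheel handler, as a sketch (handle and trackLength are assumed names for a vertical slider):

function onWheel(e) {
    // Move the handle by the scroll delta, clamped
    // to the track's length.
    handle.y = Math.max(0, Math.min(trackLength, handle.y + e.deltaY));
}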

Advanced use-cases

Manual hit-testing

To override a specific part of event handling, you can inherit from EventBoundary and set the event system’s rootBoundary!

Here’s an example that uses a SpatialHash to accelerate hit-testing. A special HashedContainer holds a spatial hash for its children, and that is used to search through them instead of a brute force loop.

This technique is particularly useful for horizontal scene graphs, where a few containers hold most of the display objects as children.
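
A minimal sketch of that wiring (the hit-testing override itself is elided, since which method you override depends on your use-case):

import { EventBoundary } from '@pixi/events';

class HashedBoundary extends EventBoundary {
    // Override hit-testing here to consult the spatial
    // hashes of HashedContainers instead of looping over
    // all children.
}

// Replace the default boundary on the event system
app.renderer.events.rootBoundary = new HashedBoundary(app.stage);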

Nested boundaries

The ultimate example: how you can use a nested EventBoundary in your scene graph. As mentioned before, this is useful when you have a disconnected scene graph and you want events to propagate over points of disconnection.

To forward events from upstream, you make the “subscene” interactive, listen to all the relevant events, and map them into the event boundary below. The event boundary should be attached to the content of your scene. It’s like implementing a stripped-down version of the EventSystem.

// Override copyMouseData to apply inverse worldTransform 
// on global coords
this.boundary.copyMouseData = (from, to) => {
    // Apply default implementation first
    PIXI.EventBoundary.prototype
        .copyMouseData
        .call(this.boundary, from, to);

    // Then bring global coords into content's world
    this.worldTransform.applyInverse(
        to.global, to.global);
};

// Propagate these events down into the content's
// scene graph!
[
    'pointerdown',
    'pointerup',
    'pointermove',
    'pointerover',
    'pointerout',
    'wheel',
].forEach((event) => {
    this.addEventListener(
        event, 
        (e) => this.boundary.mapEvent(e),
    );
});

To make the cursor on internal objects work too, you should expose the event boundary’s cursor property on the subscene.

get cursor() {
    return this.boundary.cursor;
}

PixiJS Tilemap Kit 3

In my effort to bring tighter integration to the PixiJS ecosystem, I’m upgrading external PixiJS packages and working towards lifting them to the standard of the main project. @pixi/tilemap 3 is the first package in this process. Yes, I’ve republished pixi-tilemap as @pixi/tilemap.

Here, I want to cover the new, leaner API that @pixi/tilemap 3 brings to the table. This package by Ivan Popelyshev gives you an optimized rectangular tilemap implementation you can use to render a background for your game or app composed of tile textures. The documentation is available at https://api.pixijs.io/@pixi/tilemap.html.

Tilesets

A tileset is the set of tile textures used to build the scene. Generally, you’d want the tileset to be in one big base-texture to reduce the number of network requests and improve rendering batch efficiency.

To use @pixi/tilemap, you’ll first need to export a tileset atlas as a sprite sheet. PixiJS’ spritesheet loader populates your tile textures from the sheet’s manifest. If you don’t have one at hand, you can create a sample tileset as follows:

  • Download this freebie tileset from CraftPix.net here: https://craftpix.net/download/24818/. You’ll need to sign up, however.
  • Download and install TexturePacker: https://www.codeandweb.com/texturepacker
  • Drag the “PNG” folder of the downloaded tileset into TexturePacker. It will automatically pack all the tiles into one big atlas image.
  • Then click on “Publish sprite sheet” and save the manifest.
The generated tileset should look like this!

Tilemap

The Tilemap class renders a tilemap from a predefined set of base-textures containing the tile textures. Each rendered tile references its base-texture by an index into the tileset array. This tileset array is first passed when the tilemap is created; however, you can still append base-textures after it is instantiated without changing previously added tiles.

The following example paints a static tilemap from a CraftPix tileset.
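
A minimal sketch of such a tilemap (assuming a loaded spritesheet sheet with a “grass” frame):

import { Tilemap } from '@pixi/tilemap';

// The tileset array lists the base-textures tiles may reference
const tilemap = new Tilemap([sheet.baseTexture]);

// Paint a 10×10 grid of 64px grass tiles
for (let i = 0; i < 10; i++) {
    for (let j = 0; j < 10; j++) {
        tilemap.tile(sheet.textures['grass'], i * 64, j * 64);
    }
}

app.stage.addChild(tilemap);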

The texture passed to tile() must be one of the atlases in the tilemap’s tileset. Otherwise, the tilemap will silently drop the tile. As we’ll discuss later on, CompositeTilemap can be used to get around this limitation.

Animated Tiles

The options passed to tile let you animate the rendered tile between different tile textures stored in the same base-texture. The different frames must be located uniformly in a table (or a single row/column).

The texture you pass to tile will be the first frame. Then the following parameters specify how Tilemap will find other frames:

  • animX: The x-offset between frame textures.
  • animY: The y-offset between frames.
  • animCountX: The number of frame textures per row of the table. This is 1 by default.
  • animCountY: The number of frames per column of the table. This is 1 by default.

If your frames are all in a row, you don’t need to specify animY and animCountY.

The animation frame vector (tileAnim) specifies which frame to use for all tiles in the tilemap. tileAnim[0] specifies the column modulo, and tileAnim[1] specifies the row modulo. Since it wraps around when the column/row is fully animated, you don’t have to wrap it yourself.

The above example takes advantage of the fact that some regular doors and wide doors are placed in a row in the sample atlas. animX, animCountX are used to animate between them every 500ms.
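
As a sketch of that setup (the 64px offset and “door” frame name are placeholders):

// Two door frames sit side by side, so animX points at the
// next frame and animCountX caps the frame count.
tilemap.tile(sheet.textures['door'], x, y, {
    animX: 64,
    animCountX: 2,
});

// Advance the animation frame vector every 500ms; the
// column index wraps around automatically.
let frame = 0;
setInterval(() => { tilemap.tileAnim = [++frame, 0]; }, 500);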

Tileset limitations

Tilemap renders the whole tilemap in one draw call. It doesn’t build intermediate batches of geometry like PixiJS’ Graphics does. All the tileset base-textures are bound to the GPU together.

This means that there’s a limit to how many tile sprite sheets you can use in each tilemap. WebGL 1 guarantees that at least 8 base-textures can be used together; however, most devices support 16. You can check this limit by evaluating

renderer.gl.getParameter(renderer.gl.MAX_TEXTURE_IMAGE_UNITS)

If your tileset contains more base-textures than this limit, Tilemap will silently fail to render its scene.

If you’re using only one sprite sheet like the examples above, you don’t need to worry about hitting this limit. Otherwise, CompositeTilemap is here to help.

CompositeTilemap

A “tilemap composite” will layer tilesets into multiple tilemaps. You don’t need to predefine the base-textures you’re going to use. Instead, it will try to find a tilemap with the same base-texture in its tileset when you add a tile; if none exists, the base-texture is added into a layered tilemap’s tileset. New tilemaps are automatically created when existing ones fill up.

In most cases, you can trivially swap usage of Tilemap with CompositeTilemap. However, you have to be careful about z-ordering. The tiles using textures in later tilemaps will always render above. This may become a problem with overlapping tiles in some cases.

The following example uses a CompositeTilemap to render one of the previous examples. Instead of using a separate Sprite for the background, it adds the background itself as a tile too.
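
A sketch of the swap (frame names are placeholders):

import { CompositeTilemap } from '@pixi/tilemap';

// No predefined tileset: base-textures are adopted into
// layer tilemaps as tiles reference them.
const tilemap = new CompositeTilemap();

tilemap.tile(sheet.textures['background'], 0, 0);
tilemap.tile(sheet.textures['door'], 128, 192);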

Tilemap rendering

Geometry

Tilemap internally stores tiles in a geometry buffer, which contains interleaved data for each vertex.

  • Position (in local space)
  • Texture coordinates
  • Texture frame of the tile
  • Animation parameters (specially encoded into a 32-bit 2-vector)
  • Texture index (into the tileset)

This buffer is mostly static and is lazily updated whenever the tiles are modified between rendering ticks. If the tilemap is left unchanged, the geometry is used directly from graphics memory.

Shader

The TileRenderer plugin holds the shared tilemap shader.

The vertex program decodes the animation parameters, calculates and passes the texture frame, texture coordinates, and texture index to the fragment program. The animation frame vector is passed as a uniform. 

Then, the fragment program samples the texel from the appropriate texture and outputs the pixel.

Textiles

@pixi/tilemap’s settings (which is discussed further on) contains a property called TEXTILE_UNITS. This is the number of tile base-textures that are “sewn” together when uploaded to the GPU. You can use this to raise the effective tileset limit, since each bound texture then carries multiple base-textures.

The “combined” texture is called a textile. The textile is divided into a 2-column table of square slots. Each slot is a square of size TEXTILE_DIMEN. Your tileset base-textures must be smaller than this dimension for the textile to work.

The following demonstration shows what a textile looks like when uploaded. The textile-tile dimension was set to 256 so that images aren’t spread out too far (it is 1024 by default). 

Settings

@pixi/tilemap exports a “settings” object that you should configure before a tilemap is created.

  • TEXTURES_PER_TILEMAP: This is the limit of tile base-textures kept in each layer tilemap of a composite. Once the last tilemap is filled to this limit, the next texture will go into a new tilemap.
  • TEXTILE_DIMEN, TEXTILE_UNITS: Used to configure textiles. If TEXTILE_UNITS is set to 1 (the default), textiles are not used.
  • TEXTILE_SCALE_MODE: Used to set the scaling mode of the resulting textile textures.
  • use32bitIndex: This option enables rendering tilemaps with more than 16K tiles (64K vertices).
  • DO_CLEAR: This configures whether textile slots are cleared before the tile textures are uploaded. You can disable this if tile textures “fully” cover TEXTILE_DIMEN and leave no space for a garbage background to develop.
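
For instance, a sketch of configuring these settings before creating any tilemap (the values are illustrative):

import { settings } from '@pixi/tilemap';

settings.TEXTURES_PER_TILEMAP = 8;   // base-textures per layer tilemap
settings.TEXTILE_UNITS = 4;          // sew 4 base-textures per textile
settings.TEXTILE_DIMEN = 512;        // each textile slot is 512×512
settings.use32bitIndex = true;       // allow > 16K tiles per tilemap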

Canvas support

@pixi/tilemap has a canvas fallback, although it is significantly slower. In the future, I might spin out a @pixi/canvas-tilemap to make this fallback optional.