We researched several well-known libraries but couldn't find an existing solution that fit our use case. Our requirements were WebGL2 support (with modern features like multiple render targets and multi-pass rendering), WebGPU support (for future compatibility), a pipeline-centric API surface (not a scene graph), and a permissive license.
The new engine is organized into three layers:
1) Effects, which are JS or JSON objects defining the shader passes, parameters, and textures (see the sketch after this list).
2) A high-level composition DSL with a program state abstraction. The running program can be represented as text and round-tripped to and from the UI controls; the program state binds the editable parameters to a GPU-resident graph.
3) A canvas renderer (demo: https://noisemaker.app/demo/shaders/) designed for chaining effects arbitrarily. Noisemaker's effects collection covers noise, particles, distortions, patterns, color, blending, lighting, and stateful simulations. The renderer runs on either WebGL2 or WebGPU, and the effects target pixel-level parity across both backends. The engine supports WebGPU compute shaders, but our own effects use fragment-shader GPGPU patterns for consistency with WebGL2.
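To make the first layer concrete, here is a rough sketch of what an effect definition could look like. The shape and field names below (params, textures, passes, fragment, outputs) are illustrative guesses for this post, not Noisemaker's actual schema:

// Hypothetical effect definition: the structure and field names are
// assumptions for illustration, not the library's real format.
const noiseEffect = {
  name: 'synth/noise',
  params: {
    scale: { type: 'float', default: 1.0, min: 0.1, max: 10.0 },
    speed: { type: 'float', default: 0.5 }
  },
  textures: [],
  passes: [{
    fragment: 'noise.frag',  // shader source, resolved against the base path
    outputs: ['o0']          // render target this pass writes
  }]
}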
Integrating the rendering pipeline takes minimal code. Assuming a canvas element is somewhere on the page (and a module script context, since the snippet uses top-level await), this example runs an animated noise effect:
// Load the renderer module from the shader CDN.
const SHADER_CDN = 'https://shaders.noisedeck.app/1'
const { CanvasRenderer } = await import(`${SHADER_CDN}/noisemaker-shaders-core.esm.min.js`)
// Point the renderer at the canvas and at the CDN-hosted effect bundles.
const renderer = new CanvasRenderer({
  canvas: document.getElementById('canvas'),
  width: 1024,
  height: 1024,
  basePath: SHADER_CDN,
  useBundles: true,
  bundlePath: `${SHADER_CDN}/effects`
})
// Fetch the effect manifest, then load the effect used by the program below.
await renderer.loadManifest()
await renderer.loadEffect('synth/noise')
// DSL program to create a shader graph. "search" is an effect namespace directive.
await renderer.compile(`
search synth
noise().write(o0)
render(o0)
`)
// Start the render loop.
renderer.start()
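Chaining additional effects follows the same pattern: load each effect, then reference it in the DSL program. In the variant below, the effect name 'color/posterize', the second search directive, and the chained .posterize() call are all illustrative guesses at the DSL rather than confirmed syntax; the real snippet above only demonstrates noise(), write(), and render():

// Hypothetical two-effect chain. The effect name, the extra search
// directive, and the .posterize() chaining are assumptions, not confirmed API.
await renderer.loadEffect('color/posterize')
await renderer.compile(`
search synth
search color
noise().posterize().write(o0)
render(o0)
`)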
I'll do my best to address any feedback or questions you have about the project. I'd love to discuss where it fits in the creative coding landscape relative to other libraries.