Three.js From Zero · Article s11-10

Spark.js — the 2026 Splat Renderer Choice

The Gaussian-splat library landscape churned in 2025. The library most tutorials target — @mkkellogg/gaussian-splats-3d — is dormant. The recommended successor is Spark, a Three.js-native WebGPU renderer from World Labs. Here's what changed and how to migrate.

1. One-paragraph splat refresher

Gaussian splatting replaces meshes with millions of fuzzy 3D ellipsoids — each carrying position, anisotropic shape, color, opacity, and (optionally) view-dependent spherical harmonics. The render loop sorts them back-to-front, projects each to screen, and alpha-blends. Result: photoreal real-world capture, no mesh, fuzzy surfaces (hair, foliage, glass) handled natively. S10-02 covered the math. This article is about the libraries.
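The sort-then-blend core of that loop can be sketched in a few lines of plain JavaScript. This is an illustration only — one color channel, no screen projection, not Spark code:

```javascript
// Splat rendering core, minimized: sort back-to-front by view depth,
// then alpha-blend with the "over" operator. One channel for brevity.
const splats = [
  { depth: 2.0, color: 0.8, alpha: 0.5 },   // nearest
  { depth: 8.0, color: 0.2, alpha: 0.5 },   // farthest
  { depth: 5.0, color: 0.5, alpha: 0.5 },
];

function composite(splats) {
  // Farthest first, so nearer splats composite on top.
  const sorted = [...splats].sort((a, b) => b.depth - a.depth);
  let color = 0;   // background
  for (const s of sorted) {
    color = s.alpha * s.color + (1 - s.alpha) * color;   // "over"
  }
  return color;
}
```

Because the function sorts internally, the input order doesn't matter — which is exactly why the per-frame sort is the expensive, artifact-prone step when the camera moves fast.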

2. The 2026 library landscape — what changed

| library | maintainer | status (April 28, 2026) | backend |
|---|---|---|---|
| @mkkellogg/gaussian-splats-3d | Mark Kellogg | DORMANT — last release Jan 2025. README points users to Spark. | WebGL, CPU sort |
| @lumaai/luma-web | Luma Labs | SLOWED — examples repo archived April 2025. SDK still distributed. | WebGL |
| gsplat.js (HuggingFace) | Dylan Ebert | active — last release July 2025 | WebGL |
| @sparkjsdev/spark | World Labs | ACTIVE — 2.0 preview docs are live, with breaking changes from 0.1; pin your exact version | Three.js plugin, WebGPU-aware |
| SuperSplat (PlayCanvas) | PlayCanvas team | active editor (not a runtime renderer) | WebGL + WebGPU |
If a tutorial older than late 2025 tells you to npm install @mkkellogg/gaussian-splats-3d, read the README first: the maintainer himself now points users to Spark. The old library's CPU-side sort caused visible artifacts during fast camera moves; Spark's GPU-side WebGPU sort eliminates them.
Maintenance note, April 28, 2026: Spark's official docs currently frame the 0.1 -> 2.0 jump as a preview-era migration. Treat examples as version-specific and pin the exact Spark release you ship.

3. Why Spark won

  1. Three.js plugin, not a parallel engine. Splats coexist in your scene graph with regular meshes, lights, post-processing. Add a splat the same way you add a Mesh. Mixing splats and meshes was the missing capability of the previous generation.
  2. Format omnivore. Loads PLY, SPLAT, KSPLAT, SPZ, SOGS — every format the splat ecosystem produces. You no longer need a converter pipeline before rendering.
  3. WebGPU sort. The sort is the bottleneck and the artifact source on splats. Doing it on the GPU eliminates the CPU stall and the ghosting on camera fly-throughs.
  4. Dyno DSL. A small expression language for editing splats at runtime — color shift, SDF clipping, opacity ramps, displacement. The selling point for "splats as a procedural primitive."
  5. World Labs is committed. Long-term backing matters for a fast-moving format. mkkellogg's repo was a one-person side project; Spark is the recommended successor with corporate runway.

4. Hello-world Spark with Three.js

This is the actual Spark API. Mix a splat scene with a regular Three.js mesh in the same scene graph:

// npm i three @sparkjsdev/spark
import * as THREE from 'three';
import { SplatMesh, SparkRenderer } from '@sparkjsdev/spark';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.set(0, 1, 5);

// 1. Splat scene from a hosted PLY/SPLAT/SPZ
const splats = new SplatMesh({
  url: 'https://splats.example.com/scan.spz',
});
scene.add(splats);

// 2. Regular Three.js mesh — sits in the splat world
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x10b981 })
);
cube.position.set(2, 0, 0);
scene.add(cube);

// MeshStandardMaterial needs a light, or the cube renders black
scene.add(new THREE.DirectionalLight(0xffffff, 2));

// 3. Spark needs its own per-frame call (sort + projection)
const spark = new SparkRenderer({ renderer });

function tick() {
  spark.update({ scene, camera });   // sort splats this frame
  renderer.render(scene, camera);
  requestAnimationFrame(tick);
}
tick();

The Dyno DSL — runtime splat edits

Spark exposes per-splat shader inputs as Dyno expressions. You can write small programs that mutate splats per-frame without re-uploading:

import { dyno } from '@sparkjsdev/spark';

splats.objectModifier = dyno({
  // Per-splat: shift color, fade by distance to origin
  rgb: (g) => g.rgb.mul(0.7).add(dyno.vec3(0.3, 0.5, 0.2)),
  opacity: (g) => g.opacity.mul(
    dyno.smoothstep(5.0, 0.5, dyno.length(g.center))
  ),
});

Use it for: depth fade, region clipping, color grading without re-baking the asset, debug visualizations (highlight high-opacity splats), procedural splat sources.
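As a plain-JS illustration of what that opacity expression evaluates — the real evaluation happens per-splat on the GPU; this just reproduces GLSL-style smoothstep semantics:

```javascript
// smoothstep(edge0, edge1, x): Hermite ramp between the two edges.
// With edge0 > edge1 (as in smoothstep(5.0, 0.5, distance)) the ramp
// inverts: fully opaque near the origin, faded out by 5 units away.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}
```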

5. Live demo — pseudo-splat in Three.js

The demo below uses a Three.js Points system with a Gaussian fragment shader to simulate the look of splats — the same approach as S10-02. Real splats need Spark plus a .spz or .ply file from a Polycam, Postshot, or Nerfstudio capture. The code-only Spark API is above; the demo is here for art-direction parity.
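The falloff the demo's fragment shader computes per point sprite can be sketched as a plain function (the sigma value here is an illustrative assumption, not taken from the demo):

```javascript
// Pseudo-splat alpha: a 2D Gaussian of the fragment's distance from the
// sprite center, i.e. what the shader evaluates from gl_PointCoord.
function gaussianAlpha(u, v, sigma = 0.2) {
  // u, v in [0, 1] across the point sprite; center at (0.5, 0.5)
  const dx = u - 0.5, dy = v - 0.5;
  const r2 = dx * dx + dy * dy;
  return Math.exp(-r2 / (2 * sigma * sigma));
}
```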

6. The "splats + meshes" selling point

This is the capability that flipped the field. Previous splat libraries owned the canvas — splats were the entire scene, no Three.js objects could share it. With Spark a typical production scene looks like:

  • Splat asset = real-world environment (a captured store interior, museum gallery, foliage backdrop).
  • Three.js meshes = swappable product. Watch on a wrist, sneaker on a stand, sofa in a living room.
  • Three.js lights = artistic lighting on the product. Splats are pre-lit (the capture bakes lighting into their colors); meshes need real-time lighting that complements it.
  • Post-processing = applied to both. Bloom, tonemapping, vignette wrap the composite.

This is the path Apple, Polestar, and Cartier marketing teams asked for in 2024 and didn't get until Spark shipped.

7. Performance numbers

Order of magnitude on M2 MacBook Air, 1080p, real-world captures:

| splat count | FPS (mkkellogg, WebGL) | FPS (Spark, WebGPU) |
|---|---|---|
| 500k | 60 | 120 |
| 2M | 35–45 (sort hitches) | 60–75 |
| 5M | 15–20 (heavy ghosting) | 40–50 |
| 10M | OOM / unrenderable | 20–30 |

Numbers are approximations from the Spark migration guide and community benchmarks. The WebGPU advantage compounds as splat count grows: a GPU radix sort over depth keys runs in roughly linear time, while a CPU comparison sort is O(n log n) and stalls the main thread every frame.
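The scaling difference comes down to the sort algorithm: GPU splat sorts use histogram/scatter (radix-style) passes over quantized depth keys, each pass linear in splat count. A single-pass counting sort over 8-bit keys shows the shape — a sketch, not Spark's implementation:

```javascript
// Counting sort over quantized (8-bit) depth keys: two linear passes,
// no comparisons. GPU radix sorts chain passes like this per frame.
function countingSortByDepth(keys) {
  const counts = new Uint32Array(256);
  for (const k of keys) counts[k]++;                 // histogram pass
  const starts = new Uint32Array(256);               // exclusive prefix sum
  for (let i = 1; i < 256; i++) starts[i] = starts[i - 1] + counts[i - 1];
  const out = new Uint32Array(keys.length);
  for (const k of keys) out[starts[k]++] = k;        // scatter pass
  return out;
}
```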

Mobile reality: an iPhone 13 caps comfortably at ~500k splats. iPhone 15 Pro and recent Pixels handle ~1.5M. Quest 3 sits at 1M for stereo XR. Plan accordingly — capture pipelines should output a high-quality PLY and a mobile-optimized SPZ.
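One way to act on those caps in code — a hypothetical budget picker that chooses between the full asset and the mobile-optimized SPZ. The URLs, thresholds, and deviceMemory heuristic are illustrative assumptions, not a Spark API:

```javascript
// Hypothetical asset picker: thresholds mirror the device caps above.
function pickSplatAsset({ isMobile, deviceMemoryGB }) {
  if (!isMobile) {
    return { url: '/scan-full.spz', budget: 5_000_000 };       // desktop
  }
  return deviceMemoryGB >= 6
    ? { url: '/scan-mobile-hi.spz', budget: 1_500_000 }        // recent flagship
    : { url: '/scan-mobile.spz', budget: 500_000 };            // older phones
}
```

In the browser you might feed it `navigator.userAgentData?.mobile` and `navigator.deviceMemory`, with conservative fallbacks when either is unavailable.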

8. File formats — pick the right output

| format | size (1M splats) | SH support | typical use |
|---|---|---|---|
| .ply | ~250 MB | SH 0–3 | training output, archive |
| .splat | ~12 MB | SH 0 only | web delivery, view-independent |
| .ksplat | ~10 MB | progressive SH 0–2 | mkkellogg legacy |
| .spz | ~4–8 MB | SH 0–2 | 2026 default — Niantic format, smallest, GPU-decode-friendly |
| .sogs | ~6 MB | SH 0–2 | compressed grid format, Spark-supported |

Pipeline: train → export PLY (lossless) → convert to SPZ for delivery. SuperSplat does the conversion in-browser. Once the artifact is SPZ, every modern splat renderer (including Spark) consumes it directly.

9. Migration: mkkellogg → Spark

If you have a project on the dormant lib, the diff is small:

// Before — @mkkellogg/gaussian-splats-3d
import { Viewer } from '@mkkellogg/gaussian-splats-3d';
const viewer = new Viewer({
  cameraUp: [0, -1, -0.6],
  initialCameraPosition: [0, 0, 5],
});
await viewer.addSplatScene('/scene.splat');
viewer.start();

// After — @sparkjsdev/spark
import * as THREE from 'three';
import { SplatMesh, SparkRenderer } from '@sparkjsdev/spark';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.set(0, 0, 5);
camera.up.set(0, -1, -0.6);   // same up-vector handling as before

const splats = new SplatMesh({ url: '/scene.splat' });
scene.add(splats);

const spark = new SparkRenderer({ renderer });
function tick() {
  spark.update({ scene, camera });
  renderer.render(scene, camera);
  requestAnimationFrame(tick);
}
tick();

Watch out for these:

  • Spark owns its own scene if you let it. Don't double-render. Pass your existing Three.js scene; let Spark contribute the splat sort and let renderer.render finish.
  • Spark 0.1 → 2.0 was a hard break. Tutorials before late 2025 use the old API. Verify against the migration guide on sparkjs.dev.
  • WebGPU baseline. Spark prefers WebGPU. WebGL fallback exists but performance regresses noticeably; warn your users on Safari < 17.
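A minimal capability check before choosing a backend: `navigator.gpu` is the standard WebGPU entry point. Passing the navigator object in as an argument keeps the fallback logic testable:

```javascript
// Detect WebGPU support and warn when the slower WebGL path will be used.
function pickBackend(nav) {
  // Browsers with WebGPU expose it as navigator.gpu.
  if (nav && nav.gpu) return 'webgpu';
  console.warn('WebGPU unavailable — falling back to WebGL (slower sort).');
  return 'webgl';
}

// In the browser: const backend = pickBackend(navigator);
```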

10. Takeaways

  • The mkkellogg lib is dormant. Don't start new projects on it.
  • Spark = Three.js-native, WebGPU sort, splats coexist with meshes.
  • Dyno DSL turns splats into a procedural primitive — runtime color/opacity edits.
  • Pipeline: PLY → SPZ for shipping. Smallest, GPU-friendly.
  • Mobile caps: iPhone 13 ~500k splats, iPhone 15 Pro ~1.5M, Quest 3 ~1M stereo.
  • Migration from mkkellogg is small: replace Viewer with SplatMesh + SparkRenderer.update() in your tick.