
Add spark.js support to three.js 3DTilesRenderer #1497

Draft
castano wants to merge 6 commits into NASA-AMMOS:master from Ludicon:spark

Conversation


@castano castano commented Mar 5, 2026

This is just a proof of concept to demonstrate spark.js integration in the 3DTilesRenderer. For more details see this blog post:

https://www.ludicon.com/castano/blog/2026/03/announcing-spark-js-0-1/

@gkjohnson
Contributor

Hello! Can you provide some more context for this PR?

@castano
Author

castano commented Mar 6, 2026

This is just a proof of concept to demonstrate spark.js integration in the 3DTilesRenderer. You can find more details in this blog post:

https://www.ludicon.com/castano/blog/2026/03/announcing-spark-js-0-1/

If there's interest in considering this for integration I'd be happy to add more details and submit it for review.

Contributor

@gkjohnson gkjohnson left a comment


You can find more details in this blog post

Thanks, this is helpful. Just to make sure I understand: it sounds like spark allows for textures to be transcoded at run time to memory-efficient, GPU-compatible formats, allowing for formats like webp (small on disk footprint) to be used for download while gpu-optimized formats are used at run time (small in-memory footprint), is that right?

I see there are screenshots showing increased geometric detail with spark, implying reduced memory usage so more tiles can fit in the cache, but it would be helpful to see a before / after table detailing comparisons of on-disk size, in-memory size, and additional parse time required due to transcoding from the library so the tradeoffs are clear.

Comment on lines +12 to +17
// IC: The code below assumes tex was created from an ImageBitmap
if ( tex instanceof ExternalTexture && tex.userData?.byteLength ) {

return tex.userData.byteLength;

}
Contributor

@gkjohnson gkjohnson Mar 7, 2026


What happens if an "ExternalTexture" is provided without userData.byteLength? It looks like "image" will be "null" on the Texture then, and the following code will crash, right? Assuming we can't get the actual texture size from the ExternalTexture handle we should figure out a reasonable default behavior when byteLength isn't provided.

Author


The code will crash with ExternalTextures generated by other tools. It also won't compute the size of CompressedTextures correctly, as the generateMipmaps parameter is forced to false, but textures may still have mipmaps. I'd be happy to robustify this code. I was aiming for minimal changes here.

Contributor

@gkjohnson gkjohnson Mar 9, 2026


Making it more robust would be great. I expect it's not possible to determine the actual in-memory size of one of these textures from just the WebGLTexture (or WebGPU) handle itself, right? At least not without the gl context. In lieu of any other information being available for calculating the size I was thinking we could check for a custom field (as you are here) and otherwise fallback to a fixed memory size of a 64x64 texture or something just so the value isn't "0" in the cache.
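
A minimal sketch of that fallback, assuming the userData.byteLength convention used in this PR; the function name and the 64x64 RGBA8 default are illustrative, not part of the project:

```javascript
// Estimate the GPU memory of a texture whose contents live behind an
// external handle. Prefer the byteLength hint when the loader provides
// one; otherwise assume a 64x64 RGBA8 texture so the LRU cache never
// records a size of zero bytes.
function estimateExternalTextureBytes( tex ) {

	if ( tex.userData && typeof tex.userData.byteLength === 'number' ) {

		return tex.userData.byteLength;

	}

	// Fallback: 64 x 64 texels at 4 bytes each, no mipmaps.
	return 64 * 64 * 4;

}
```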

Author


I'll propose that change in a separate PR. Here I just wanted to keep it simple to demonstrate that the changes required for integration are minimal.

Contributor


That would be great. A PR to improve handling of just ExternalTextures is something we could merge immediately. A couple other things that come to mind that need to be handled:

  • How is disposal handled for these external textures? Does calling ExternalTexture.dispose actually dispose of the WebGLTexture / WebGPUTexture handle? Or does disposal need to happen separately, as with an imageBitmap?
  • The UnloadTilesPlugin is designed to delete any GPU memory for tiles that aren't actively visible or being rendered. When the tiles are made visible again the textures and geometry are re-initialized on the GPU. I assume this model wouldn't work for the ExternalTexture since re-initializing it isn't as simple as just reuploading the existing data? The easy thing to do in that case is to just ignore and not dispose ExternalTextures encountered in the plugin.
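
A minimal sketch of that easy path (the isExternalTexture flag and the function name are assumptions here, not this project's API):

```javascript
// Dispose tile textures during unload, but skip external handles: they
// cannot be re-initialized by simply reuploading source data, so
// disposing them would make the tile unrecoverable.
function unloadTileTextures( textures ) {

	let disposed = 0;

	for ( const tex of textures ) {

		// Leave external texture handles alone.
		if ( tex.isExternalTexture ) continue;

		tex.dispose();
		disposed ++;

	}

	return disposed;

}
```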

Author


OK, I just submitted #1560

My understanding is that external textures are disposed just like standard textures. No difference there.

The external texture does not keep track of the source data, so it's not possible to re-initialize it easily. It should be possible to have the external texture track the source image and creation options to allow that, but that would require the plugin to be aware of spark.js external textures.

Contributor


My understanding is that external textures are disposed just like standard textures. No difference there.

I think this should be validated. Looking through the code it looks like the "dispose" callback which ultimately disposes of the internal texture handle is registered in "initTexture" (called from "uploadTexture"), which never gets called if using "ExternalTexture". It's likely that the application is responsible for disposing of the WebGLTexture handle, similar to "image bitmap". Note that closing image bitmaps is explicitly handled here and here in the project, so we may have to do something similar here.
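
If the application does indeed own the handle's lifetime, one hedged sketch of per-tile cleanup, analogous to how the project explicitly closes image bitmaps (the registry, trackHandle, and releaseTile names are all illustrative, not part of this project's API):

```javascript
// Illustrative registry mapping each tile to the raw WebGLTexture handles
// created for it, so they can be deleted when the tile is disposed.
const tileHandles = new Map();

function trackHandle( tile, handle ) {

	if ( ! tileHandles.has( tile ) ) tileHandles.set( tile, [] );
	tileHandles.get( tile ).push( handle );

}

// Delete every handle recorded for the tile and return how many were freed.
function releaseTile( tile, gl ) {

	const handles = tileHandles.get( tile ) || [];
	handles.forEach( ( h ) => gl.deleteTexture( h ) );
	tileHandles.delete( tile );
	return handles.length;

}
```

In the browser, gl would be the WebGL context the handles were created on.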

The external texture does not keep track of the source data, so it's not possible to re-initialize it easily. It should be possible to have the external texture track the source image and creation options to allow that, but that would require the plugin to be aware of spark.js external textures.

I think if we handle the textures like image bitmap this should be okay.

Author


You are right. I think this should take care of it: Ludicon/spark.js#31

I'll be traveling for the next few days, but I'll test it more thoroughly and merge when I return.

Contributor


This may work for use within this 3d tiles project but I don't think it exactly lines up with expectations around texture disposal within three.js.

In three.js multiple textures can share the same "source" (the external webgl texture in this case), and three.js uses reference counting on it to determine whether to actually remove the content from the GPU. And in common cases users may dispose a texture to remove the content from the GPU only to reuse it later (this project includes a plugin that does exactly that), but by its nature this texture handle isn't something that can simply be reuploaded.

This is a similar issue to what we have with ImageBitmap because once that data handle has been closed it can't be reuploaded. This is why it's the user's responsibility to call "close" when the data is definitely finished with, even when the image is loaded via something like GLTFLoader.

It seems like one of the issues in this case is that in order to release the WebGLTexture handle we need access to the associated WebGL context, which may not always be easily available (as is the case here). If the context were stored on the ExternalTexture itself this would help, though. I'm wondering if storing the associated context on the texture is something three.js would support? I'm trying to think through other clean solutions, but this is a bit complicated. WebGPU seems to make this easier since you can just call "destroy" on the handle itself.
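
A sketch of that idea; the userData.gl field is purely an assumption (three.js defines no such convention), and sourceTexture is assumed to hold the raw WebGLTexture handle:

```javascript
// If the creating WebGL context were recorded on the ExternalTexture,
// releasing the raw handle would not require plumbing the renderer through.
// Returns whether the handle could actually be deleted.
function disposeExternalTexture( tex ) {

	const gl = tex.userData && tex.userData.gl; // assumed custom field
	const handle = tex.sourceTexture;           // assumed raw WebGLTexture

	if ( gl && handle ) {

		gl.deleteTexture( handle );
		return true;

	}

	return false;

}
```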

Comment thread src/three/renderer/utils/MemoryUtils.js
Comment thread src/three/plugins/GLTFExtensionsPlugin.js Outdated
@castano
Author

castano commented Mar 8, 2026

Thanks, this is helpful. Just to make sure I understand: it sounds like spark allows for textures to be transcoded at run time to memory-efficient, GPU-compatible formats, allowing for formats like webp (small on disk footprint) to be used for download while gpu-optimized formats are used at run time (small in-memory footprint), is that right?

That's exactly right.

I see there are screenshots showing increased geometric detail with spark, implying reduced memory usage so more tiles can fit in the cache, but it would be helpful to see a before / after table detailing comparisons of on-disk size, in-memory size, and additional parse time required due to transcoding from the library so the tradeoffs are clear.

The overhead of spark.js is fairly low, because transcoding happens on the GPU and the codecs are extremely fast. The bottleneck is usually in the image decoding on the CPU, which is orders of magnitude slower.

I've provided some numbers in previous blog posts, for example, here's a size comparison of the sponza scene:

https://www.ludicon.com/castano/blog/2026/02/an-updated-sponza-gltf/

On that test, enabling or disabling spark did not affect the loading time in a perceptible way.

Here are some more numbers from an earlier release:

https://www.ludicon.com/castano/blog/2025/09/three-js-spark-js/

I should note that spark compression will increase loading time, because the renderer will load many more tiles, so it will increase bandwidth use, but if that's a concern it's possible to control that with the errorTarget parameter.

@gkjohnson
Contributor

gkjohnson commented Mar 9, 2026

I should note that spark compression will increase loading time, because the renderer will load many more tiles, so it will increase bandwidth use, but if that's a concern it's possible to control that with the errorTarget parameter.

The max memory limit in the LRUCache shouldn't typically be limiting the tiles that the renderer determines to be needed unless "errorTarget" is specifically being picked to create the issue. I assume you were reducing the "errorTarget" value (or lru cache memory cap) to a point where the amount of tiles loaded was being limited by memory. This shouldn't be a typical problem, though - noting that it's still a benefit to have more memory overhead where possible. Am I misunderstanding?

@castano
Author

castano commented Mar 10, 2026

The max memory limit in the LRUCache shouldn't typically be limiting the tiles that the renderer determines to be needed unless "errorTarget" is specifically being picked to create the issue. I assume you were reducing the "errorTarget" value (or lru cache memory cap) to a point where the amount of tiles loaded was being limited by memory. This shouldn't be a typical problem, though - noting that it's still a benefit to have more memory overhead where possible. Am I misunderstanding?

That's not what I see on my end. At errorTarget=16 I often see loading limited by the memory cap. With spark.js the cap is usually not reached, which is why the results have greater detail. Lowering the errorTarget certainly makes the difference more stark.

@castano
Author

castano commented Mar 10, 2026

For example, in the following screenshot memory caps at 275 MB:
[Screenshot 2026-03-10 at 1 27 19 PM]

With spark enabled, memory use goes down to 113 MB, even though detail is noticeably higher:
[Screenshot 2026-03-10 at 1 27 02 PM]

@gkjohnson
Contributor

That's not what I see on my end. At errorTarget=16 I often see loading limited by the memory cap.

You're right, I was misremembering. I'm not reaching the cap with the default 20 error target in the demo but it does reach it with 16. With the updated load strategy ("TilesRenderer.optimizedLoadStrategy = true", which will be changed to the default at some point) the number of tiles is reduced by ~20% in the Google tiles case but I agree more overhead is always better.

castano and others added 2 commits March 16, 2026 18:00
Update googleMapsAerial example to enable spark through the plugins argument.
Require spark 0.1.3
Comment thread package.json
Comment on lines -123 to +124
"three": ">=0.167.0"
"three": ">=0.182.0"
Contributor


We shouldn't change the peer dependency requirements for the project.

Author


I'll need to double check whether the required three.js features are all present in that older version, and if they aren't, gracefully handle the error. It may just work, since most of the three.js changes were in the WebGPU backend and in better support for normal maps, which these demos don't need.

Contributor


Spark.js is not a dependency or requirement of this project so the peer dependency should not change - users can use and install 3d-tiles-renderer with r167+ just fine. Spark.js will need to specify its own three.js dependency limit, so if users install Spark that peer dependency version will need to be respected.

Comment thread example/three/googleMapsAerial.js
Comment thread example/three/googleMapsAerial.js
@gkjohnson
Contributor

@castano somewhat offtopic for this PR but is it possible for spark.js to transcode a "Canvas" element or "SVG" element used for a texture to a more memory-efficient GPU format? Some of the plugins are using Canvas to draw vector graphics (like for GeoJSON or other formats) or compose multiple tiled image textures, so we don't have a traditional image handle to process in these cases, but affording memory improvements here would be nice.

@castano
Author

castano commented Mar 30, 2026

somewhat offtopic for this PR but is it possible for spark.js to transcode a "Canvas" element or "SVG" element used for a texture to a more memory-efficient GPU format? Some of the plugins are using Canvas to draw vector graphics (like for GeoJSON or other formats) or compose multiple tiled image textures, so we don't have a traditional image handle to process in these cases, but affording memory improvements here would be nice.

Canvas and SVG elements should work fine, but I have not tested that code path under WebGL. You can find a WebGPU SVG example in the spark.js SDK:

https://github.com/Ludicon/spark.js/blob/b01b03fd4cd19d70a0c3ea977e348c416e6700c8/examples/svg.html

Examples are automatically published at:

https://ludicon.github.io/spark.js/

I'll look into testing the canvas and SVG code paths under WebGL, but even if it doesn't work now, it shouldn't be hard to support that.

If you can point me to the examples that could benefit from that support I can take a look and ensure it works for those use cases.

@gkjohnson
Contributor

gkjohnson commented Mar 31, 2026

Canvas and SVG elements should work fine, but I have not tested that code path under WebGL. You can find a WebGPU SVG example in the spark.js SDK:

https://github.com/Ludicon/spark.js/blob/b01b03fd4cd19d70a0c3ea977e348c416e6700c8/examples/svg.html

It looks like this is passing the SVG path as a string but I expect it will work just fine if you pass the SVG as an image element, as well:

const svg = new Image();
svg.src = './assets/Tiger.svg';

const texture = await spark.encodeTexture( svg, { format: "rgba" } );

--

Regarding canvas, though - it looks like the "encodeTexture" function does not take an HTMLCanvasElement (or OffscreenCanvas) as an argument, which is why I ask:

source (string | HTMLImageElement | ImageBitmap | GPUTexture)
The image to encode. Can be a GPUTexture, URL, DOM image or ImageBitmap.

The "ImageOverlayPlugins", which is used in the "quantMeshOverlays" demo, composes tiled textures for overlays. I'll have to think through how something like spark can be integrated, though.
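
Until canvas sources are documented, one hedged workaround would be to snapshot the canvas as an ImageBitmap, which the signature quoted above does accept. encodeCanvas and the injected converter are illustrative; only encodeTexture and the "rgba" option come from the quoted docs:

```javascript
// Route a canvas through spark by first converting it to an ImageBitmap.
// The converter is injected (defaulting to the browser's createImageBitmap)
// so the helper can be exercised outside the browser.
async function encodeCanvas( canvas, spark, toBitmap = createImageBitmap ) {

	const bitmap = await toBitmap( canvas );

	return spark.encodeTexture( bitmap, { format: 'rgba' } );

}
```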

@castano
Author

castano commented Apr 14, 2026

Regarding canvas, though - it looks like the "encodeTexture" function does not take an HTMLCanvasElement (or OffscreenCanvas) as an argument

Right. HTMLCanvasElement and OffscreenCanvas did actually work already, but I did not want to document that since I had never tested it. I've updated the docs and added an example to validate that code path:

Ludicon/spark.js#30
