keywords: Graphics, Texture Streaming Notes

Documents

Quoted from:
https://www.gamedev.net/forums/topic/698332-what-is-texture-streaming/

Like ChuckNovice explained, it’s the concept of streaming texture data in and out of GPU memory based on which textures are necessary to render the current viewpoint. The basic idea is that you only need high-resolution textures when the camera is close to a surface that uses the texture, so you can drop the higher-resolution mip levels as you get further away. So for instance if you have a 2048x2048 texture, as you get further away you would drop down to 1024x1024, then 512x512, and so on.
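
A minimal sketch of that distance-to-mip mapping (my own illustration, not from the quoted post): assuming one texel covers roughly one screen pixel at some reference distance, each doubling of distance lets you drop one more mip level. The helper name and parameters below are hypothetical, not an engine API.

#include <algorithm>
#include <cmath>
#include <cstdint>

// refDistance is the distance at which the full-resolution mip 0 is needed;
// each doubling of distance beyond that drops one mip (2048 -> 1024 -> 512 -> ...).
uint32_t DesiredMipLevel(float distanceToSurface, float refDistance, uint32_t mipCount)
{
    float d = std::max(distanceToSurface, refDistance);
    uint32_t drop = static_cast<uint32_t>(std::floor(std::log2(d / refDistance)));
    return std::min(drop, mipCount - 1u);   // clamp to the smallest mip available
}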

Generally there are two main parts of this that can be tricky. The first is the actual streaming infrastructure: streaming texture data off disk (and possibly decompressing it), and then making sure it finds its way into GPU memory using whichever graphics API you’re currently utilizing. You usually want to stream as fast as you can, but you also want to make sure that you don’t use too many CPU/GPU resources, since that can interfere with the game’s performance.

The other hard part is actually determining which mip level should be used for each texture. There are several possible avenues for doing this, with different tradeoffs on accuracy, runtime performance, and the ability for things to move around at runtime. Many games with mostly-static worlds will chunk up their scenes into discrete regions and pre-compute the required texture mip levels for each region. Then as the camera moves through the world from one region to the next, the engine’s streaming system moves textures in and out of memory based on the current visible set for that region. The Titanfall 2 presentation linked below is an example of this approach.

Other games will try to compute the required texture set on the fly, possibly by reading back information from the GPU itself. RAGE took this approach: they rendered out IDs to a low-resolution render target and read that back on the CPU to feed into their streaming system. In their case they were streaming in pages for their virtual texture system, but the basic concept is the same for normal texturing (more details are in the id Tech 5 presentation linked below). The upside of that approach is that it handled fully dynamic geometry and gave rather accurate results for the current viewpoint. The downside was that it was always at least a frame behind the current camera movement, and couldn’t predict very quick camera motions. So if you spun the camera around 180 degrees really quickly, you might see some textures slowly pop in as they get streamed off disk. In their case the issue was made worse by their choice of having totally unique texture pages for the whole world, but you could still have the same problem with traditional textures.
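
As an illustration of the region-based approach (a sketch under my own assumptions, not the Titanfall 2 implementation): the offline build bakes a required-mip table per region, and at runtime the streamer diffs that table against what is currently resident whenever the camera crosses a region boundary.

#include <cstdint>
#include <unordered_map>

using TextureId = uint32_t;

struct Region {
    // Precomputed offline: the most detailed mip each texture needs while the
    // camera is anywhere inside this region (smaller index == more detail).
    std::unordered_map<TextureId, uint32_t> requiredMip;
};

class TextureStreamer {
public:
    explicit TextureStreamer(uint32_t lowestMip) : lowestMip_(lowestMip) {}

    // Called when the camera crosses into a new region.
    void OnRegionChanged(const Region& region) {
        // Stream in any extra detail the new region needs.
        for (const auto& [tex, needed] : region.requiredMip) {
            if (needed < ResidentMip(tex))
                RequestLoad(tex, needed);    // would enqueue an async disk read + GPU upload
        }
        // Drop detail that nothing in the new region requires, to stay in budget.
        for (auto& [tex, resident] : residentMip_) {
            auto it = region.requiredMip.find(tex);
            uint32_t needed = (it != region.requiredMip.end()) ? it->second : lowestMip_;
            if (needed > resident)
                RequestEvict(tex, needed);   // would free GPU memory down to `needed`
        }
    }

private:
    uint32_t ResidentMip(TextureId tex) const {
        auto it = residentMip_.find(tex);
        return (it != residentMip_.end()) ? it->second : lowestMip_;
    }
    void RequestLoad(TextureId tex, uint32_t mip)  { residentMip_[tex] = mip; }
    void RequestEvict(TextureId tex, uint32_t mip) { residentMip_[tex] = mip; }

    uint32_t lowestMip_;                                   // always-resident low-detail mip
    std::unordered_map<TextureId, uint32_t> residentMip_;  // most detailed mip currently resident
};

Here RequestLoad/RequestEvict only record the new state; a real streamer would dispatch the asynchronous I/O and GPU memory work mentioned above.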

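For the readback-driven variant, the CPU side is essentially a tally over the feedback buffer. The sketch below assumes the GPU has already written a texture ID and desired mip per pixel into a small buffer that was read back; names are hypothetical, and this is not RAGE’s actual code.

#include <cstdint>
#include <unordered_map>
#include <vector>

struct FeedbackTexel {
    uint32_t textureId;   // which texture this pixel sampled (0 == none)
    uint32_t desiredMip;  // mip level the GPU computed for that pixel
};

// Collapse one frame of feedback into "most detailed mip requested per texture",
// which the streamer can then compare against what is currently resident.
std::unordered_map<uint32_t, uint32_t>
AnalyzeFeedback(const std::vector<FeedbackTexel>& feedback)
{
    std::unordered_map<uint32_t, uint32_t> wanted;
    for (const FeedbackTexel& t : feedback) {
        if (t.textureId == 0)
            continue;                                // background / untextured pixel
        auto [it, inserted] = wanted.emplace(t.textureId, t.desiredMip);
        if (!inserted && t.desiredMip < it->second)  // keep the most detailed request
            it->second = t.desiredMip;
    }
    return wanted;
}

Because the feedback buffer comes from a previous frame’s viewpoint, the result is inherently at least a frame late, which is exactly the pop-in problem described above.
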
Reference

What is Texture Streaming
https://www.gamedev.net/forums/topic/615368-what-is-texture-streaming/

Channel Packing
http://wiki.polycount.com/wiki/ChannelPacking

Presentations

id Tech 5 Challenges From Texture Virtualization to Massive Parallelization
http://mrl.cs.vsb.cz/people/gaura/agu/05-JP_id_Tech_5_Challenges.pdf

Texture Streaming in Titanfall 2
https://www.gdcvault.com/play/1024418/Efficient-Texture-Streaming-in-Titanfall

